Test Report: Docker_Windows 12739

1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f:2022-05-16:23976

Failed tests (150/219)

Order  Failed test  Duration (s)
20 TestOffline 96.23
22 TestAddons/Setup 78.1
23 TestCertOptions 101.63
24 TestCertExpiration 392.38
25 TestDockerFlags 100.86
26 TestForceSystemdFlag 98.42
27 TestForceSystemdEnv 97.64
32 TestErrorSpam/setup 77.92
41 TestFunctional/serial/StartWithProxy 81.05
42 TestFunctional/serial/AuditLog 0
43 TestFunctional/serial/SoftStart 116.77
44 TestFunctional/serial/KubeContext 4.17
45 TestFunctional/serial/KubectlGetPods 4.18
52 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 3.04
53 TestFunctional/serial/CacheCmd/cache/cache_reload 11.98
55 TestFunctional/serial/MinikubeKubectlCmd 5.89
56 TestFunctional/serial/MinikubeKubectlCmdDirectly 5.88
57 TestFunctional/serial/ExtraConfig 116.89
58 TestFunctional/serial/ComponentHealth 4.23
59 TestFunctional/serial/LogsCmd 3.51
60 TestFunctional/serial/LogsFileCmd 4.34
66 TestFunctional/parallel/StatusCmd 13.17
69 TestFunctional/parallel/ServiceCmd 5.38
70 TestFunctional/parallel/ServiceCmdConnect 5.54
72 TestFunctional/parallel/PersistentVolumeClaim 4.14
74 TestFunctional/parallel/SSHCmd 10.78
75 TestFunctional/parallel/CpCmd 12.77
76 TestFunctional/parallel/MySQL 4.47
77 TestFunctional/parallel/FileSync 7.31
78 TestFunctional/parallel/CertSync 23.45
82 TestFunctional/parallel/NodeLabels 4.4
84 TestFunctional/parallel/NonActiveRuntimeDisabled 3.29
89 TestFunctional/parallel/DockerEnv/powershell 10.11
91 TestFunctional/parallel/Version/components 3.21
95 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
101 TestFunctional/parallel/ImageCommands/ImageListShort 2.91
102 TestFunctional/parallel/ImageCommands/ImageListTable 2.9
103 TestFunctional/parallel/ImageCommands/ImageListJson 2.93
104 TestFunctional/parallel/ImageCommands/ImageListYaml 2.96
105 TestFunctional/parallel/ImageCommands/ImageBuild 8.86
106 TestFunctional/parallel/ImageCommands/Setup 2.06
107 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 6.04
108 TestFunctional/parallel/UpdateContextCmd/no_changes 3.16
109 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 3.26
110 TestFunctional/parallel/UpdateContextCmd/no_clusters 3.17
111 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 6.07
112 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.03
113 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.98
115 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.26
116 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.1
122 TestIngressAddonLegacy/StartLegacyK8sCluster 79.4
124 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 7.04
126 TestIngressAddonLegacy/serial/ValidateIngressAddons 3.83
129 TestJSONOutput/start/Command 77.82
130 TestJSONOutput/start/Audit 0
132 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
133 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0.01
135 TestJSONOutput/pause/Command 3.09
136 TestJSONOutput/pause/Audit 0
141 TestJSONOutput/unpause/Command 3.03
142 TestJSONOutput/unpause/Audit 0
147 TestJSONOutput/stop/Command 22.04
148 TestJSONOutput/stop/Audit 0
150 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
151 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0.01
154 TestKicCustomNetwork/create_custom_network 244.08
156 TestKicExistingNetwork 7.33
157 TestKicCustomSubnet 234.95
161 TestMountStart/serial/StartWithMountFirst 81.59
164 TestMultiNode/serial/FreshStart2Nodes 81.52
165 TestMultiNode/serial/DeployApp2Nodes 16.77
166 TestMultiNode/serial/PingHostFrom2Pods 5.85
167 TestMultiNode/serial/AddNode 6.92
168 TestMultiNode/serial/ProfileList 7.71
169 TestMultiNode/serial/CopyFile 6.68
170 TestMultiNode/serial/StopNode 10.15
171 TestMultiNode/serial/StartAfterStop 8.32
172 TestMultiNode/serial/RestartKeepsNodes 140.16
173 TestMultiNode/serial/DeleteNode 9.96
174 TestMultiNode/serial/StopMultiNode 31.55
175 TestMultiNode/serial/RestartMultiNode 118.31
176 TestMultiNode/serial/ValidateNameConflict 170.79
180 TestPreload 90.12
181 TestScheduledStopWindows 89.38
183 TestSkaffold 90.82
185 TestInsufficientStorage 32.55
186 TestRunningBinaryUpgrade 373.13
188 TestKubernetesUpgrade 116.81
189 TestMissingContainerUpgrade 371.33
193 TestStoppedBinaryUpgrade/Upgrade 331.66
194 TestNoKubernetes/serial/StartWithK8s 86.32
195 TestNoKubernetes/serial/StartWithStopK8s 120.84
203 TestNoKubernetes/serial/Start 96.25
205 TestPause/serial/Start 85.38
206 TestStoppedBinaryUpgrade/MinikubeLogs 3.22
219 TestStartStop/group/old-k8s-version/serial/FirstStart 86.28
221 TestStartStop/group/no-preload/serial/FirstStart 85.03
223 TestStartStop/group/embed-certs/serial/FirstStart 84.67
224 TestStartStop/group/old-k8s-version/serial/DeployApp 8.39
225 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 7.21
226 TestStartStop/group/old-k8s-version/serial/Stop 26.71
227 TestStartStop/group/no-preload/serial/DeployApp 8.23
228 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 7.19
229 TestStartStop/group/no-preload/serial/Stop 26.57
230 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 9.85
231 TestStartStop/group/old-k8s-version/serial/SecondStart 121.93
232 TestStartStop/group/embed-certs/serial/DeployApp 8.24
233 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 7.23
234 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 9.95
235 TestStartStop/group/embed-certs/serial/Stop 26.67
236 TestStartStop/group/no-preload/serial/SecondStart 121.74
237 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 9.85
238 TestStartStop/group/embed-certs/serial/SecondStart 122.15
239 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 4.16
240 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 4.47
241 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 7.34
242 TestStartStop/group/old-k8s-version/serial/Pause 11.6
243 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 4.26
244 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 4.41
245 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 7.44
246 TestStartStop/group/no-preload/serial/Pause 11.74
248 TestStartStop/group/default-k8s-different-port/serial/FirstStart 86.09
249 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 4.13
250 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 4.45
251 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 7.46
253 TestStartStop/group/newest-cni/serial/FirstStart 85.89
254 TestStartStop/group/embed-certs/serial/Pause 11.65
255 TestNetworkPlugins/group/auto/Start 81.39
256 TestNetworkPlugins/group/false/Start 81.18
257 TestStartStop/group/default-k8s-different-port/serial/DeployApp 8.53
258 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 7.38
261 TestStartStop/group/default-k8s-different-port/serial/Stop 27.12
262 TestStartStop/group/newest-cni/serial/Stop 27.06
263 TestNetworkPlugins/group/cilium/Start 81.76
264 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 10.29
265 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 10.38
266 TestStartStop/group/default-k8s-different-port/serial/SecondStart 122.56
267 TestNetworkPlugins/group/calico/Start 81.9
268 TestStartStop/group/newest-cni/serial/SecondStart 122.84
269 TestNetworkPlugins/group/custom-weave/Start 81.97
270 TestNetworkPlugins/group/enable-default-cni/Start 82.36
271 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 4.06
274 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 7.29
275 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 4.34
276 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 7.57
277 TestStartStop/group/newest-cni/serial/Pause 11.85
278 TestStartStop/group/default-k8s-different-port/serial/Pause 11.71
279 TestNetworkPlugins/group/kindnet/Start 81.66
280 TestNetworkPlugins/group/bridge/Start 81.15
281 TestNetworkPlugins/group/kubenet/Start 81.26
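When triaging a failure list this long, sorting by duration quickly surfaces the tests that burned the most wall-clock time (often the timeouts). As a sketch, assume the table rows above are saved to a plain-text file named `failures.txt` with the same three whitespace-separated columns (order, test name, seconds) — the file name and layout are assumptions, not part of this report:

```shell
# failures.txt: one row per failed test, "<order> <name> <duration-seconds>".
# Sort numerically on the third column, longest-running first, show the top five.
sort -k3,3 -rn failures.txt | head -n 5
```

A single failure can then typically be reproduced against a locally built binary via the repository's integration suite (roughly `go test ./test/integration -run 'TestOffline'` together with the minikube start flags shown in the log below; the exact invocation depends on the repo's Makefile).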
TestOffline (96.23s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-20220516224650-2444 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p offline-docker-20220516224650-2444 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker: exit status 60 (1m23.0418265s)
-- stdout --
	* [offline-docker-20220516224650-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node offline-docker-20220516224650-2444 in cluster offline-docker-20220516224650-2444
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "offline-docker-20220516224650-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	
-- /stdout --
** stderr ** 
	I0516 22:46:50.947384    8032 out.go:296] Setting OutFile to fd 1564 ...
	I0516 22:46:51.035796    8032 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:46:51.035796    8032 out.go:309] Setting ErrFile to fd 1568...
	I0516 22:46:51.036795    8032 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:46:51.047840    8032 out.go:303] Setting JSON to false
	I0516 22:46:51.049794    8032 start.go:115] hostinfo: {"hostname":"minikube2","uptime":4323,"bootTime":1652736888,"procs":149,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0516 22:46:51.049794    8032 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0516 22:46:51.088639    8032 out.go:177] * [offline-docker-20220516224650-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0516 22:46:51.093723    8032 notify.go:193] Checking for updates...
	I0516 22:46:51.097768    8032 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0516 22:46:51.104078    8032 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0516 22:46:51.112578    8032 out.go:177]   - MINIKUBE_LOCATION=12739
	I0516 22:46:51.118998    8032 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0516 22:46:51.124734    8032 config.go:178] Loaded profile config "multinode-20220516223121-2444-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:46:51.124734    8032 driver.go:358] Setting default libvirt URI to qemu:///system
	I0516 22:46:53.848463    8032 docker.go:137] docker version: linux-20.10.14
	I0516 22:46:53.856296    8032 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:46:55.978703    8032 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1216379s)
	I0516 22:46:55.979496    8032 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 22:46:54.8861787 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:46:55.982607    8032 out.go:177] * Using the docker driver based on user configuration
	I0516 22:46:55.985410    8032 start.go:284] selected driver: docker
	I0516 22:46:55.985410    8032 start.go:806] validating driver "docker" against <nil>
	I0516 22:46:55.985410    8032 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0516 22:46:56.058864    8032 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:46:58.224084    8032 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1650982s)
	I0516 22:46:58.224423    8032 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 22:46:57.1624627 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:46:58.224748    8032 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0516 22:46:58.225513    8032 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0516 22:46:58.231340    8032 out.go:177] * Using Docker Desktop driver with the root privilege
	I0516 22:46:58.233630    8032 cni.go:95] Creating CNI manager for ""
	I0516 22:46:58.233630    8032 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0516 22:46:58.233630    8032 start_flags.go:306] config:
	{Name:offline-docker-20220516224650-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:offline-docker-20220516224650-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 22:46:58.237070    8032 out.go:177] * Starting control plane node offline-docker-20220516224650-2444 in cluster offline-docker-20220516224650-2444
	I0516 22:46:58.239475    8032 cache.go:120] Beginning downloading kic base image for docker with docker
	I0516 22:46:58.242385    8032 out.go:177] * Pulling base image ...
	I0516 22:46:58.244820    8032 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0516 22:46:58.244820    8032 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0516 22:46:58.245022    8032 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0516 22:46:58.245102    8032 cache.go:57] Caching tarball of preloaded images
	I0516 22:46:58.245865    8032 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0516 22:46:58.246098    8032 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0516 22:46:58.246407    8032 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\offline-docker-20220516224650-2444\config.json ...
	I0516 22:46:58.246407    8032 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\offline-docker-20220516224650-2444\config.json: {Name:mk18ad6a2cc58bd32f52402df28d8d9f25a0c9d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0516 22:46:59.371180    8032 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0516 22:46:59.371275    8032 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:46:59.371681    8032 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:46:59.371681    8032 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
	I0516 22:46:59.371681    8032 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory, skipping pull
	I0516 22:46:59.371681    8032 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in cache, skipping pull
	I0516 22:46:59.371681    8032 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
	I0516 22:46:59.371681    8032 cache.go:160] Loading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from local cache
	I0516 22:46:59.371681    8032 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:47:01.759017    8032 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from cached tarball
	I0516 22:47:01.759137    8032 cache.go:206] Successfully downloaded all kic artifacts
	I0516 22:47:01.759263    8032 start.go:352] acquiring machines lock for offline-docker-20220516224650-2444: {Name:mkceaabefdb859cf0f8f457e580e289966444f65 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:47:01.759263    8032 start.go:356] acquired machines lock for "offline-docker-20220516224650-2444" in 0s
	I0516 22:47:01.760068    8032 start.go:91] Provisioning new machine with config: &{Name:offline-docker-20220516224650-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:offline-docker-20220516224650-2444 Namespace:default APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0516 22:47:01.760068    8032 start.go:131] createHost starting for "" (driver="docker")
	I0516 22:47:01.763211    8032 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0516 22:47:01.764449    8032 start.go:165] libmachine.API.Create for "offline-docker-20220516224650-2444" (driver="docker")
	I0516 22:47:01.764729    8032 client.go:168] LocalClient.Create starting
	I0516 22:47:01.764923    8032 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 22:47:01.765953    8032 main.go:134] libmachine: Decoding PEM data...
	I0516 22:47:01.765953    8032 main.go:134] libmachine: Parsing certificate...
	I0516 22:47:01.765953    8032 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 22:47:01.766455    8032 main.go:134] libmachine: Decoding PEM data...
	I0516 22:47:01.766455    8032 main.go:134] libmachine: Parsing certificate...
	I0516 22:47:01.776991    8032 cli_runner.go:164] Run: docker network inspect offline-docker-20220516224650-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:47:03.769287    8032 cli_runner.go:211] docker network inspect offline-docker-20220516224650-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:47:03.769372    8032 cli_runner.go:217] Completed: docker network inspect offline-docker-20220516224650-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.9922039s)
	I0516 22:47:03.777273    8032 network_create.go:272] running [docker network inspect offline-docker-20220516224650-2444] to gather additional debugging logs...
	I0516 22:47:03.777273    8032 cli_runner.go:164] Run: docker network inspect offline-docker-20220516224650-2444
	W0516 22:47:04.885337    8032 cli_runner.go:211] docker network inspect offline-docker-20220516224650-2444 returned with exit code 1
	I0516 22:47:04.885337    8032 cli_runner.go:217] Completed: docker network inspect offline-docker-20220516224650-2444: (1.1080553s)
	I0516 22:47:04.885337    8032 network_create.go:275] error running [docker network inspect offline-docker-20220516224650-2444]: docker network inspect offline-docker-20220516224650-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: offline-docker-20220516224650-2444
	I0516 22:47:04.885337    8032 network_create.go:277] output of [docker network inspect offline-docker-20220516224650-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: offline-docker-20220516224650-2444
	
	** /stderr **
	I0516 22:47:04.895692    8032 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 22:47:06.024843    8032 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1289839s)
	I0516 22:47:06.049282    8032 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0003481b8] misses:0}
	I0516 22:47:06.278709    8032 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:47:06.278815    8032 network_create.go:115] attempt to create docker network offline-docker-20220516224650-2444 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0516 22:47:06.291168    8032 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220516224650-2444
	W0516 22:47:07.617732    8032 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220516224650-2444 returned with exit code 1
	I0516 22:47:07.617732    8032 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220516224650-2444: (1.3262994s)
	W0516 22:47:07.617732    8032 network_create.go:107] failed to create docker network offline-docker-20220516224650-2444 192.168.49.0/24, will retry: subnet is taken
	I0516 22:47:07.638017    8032 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003481b8] amended:false}} dirty:map[] misses:0}
	I0516 22:47:07.638017    8032 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:47:07.661396    8032 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003481b8] amended:true}} dirty:map[192.168.49.0:0xc0003481b8 192.168.58.0:0xc000114310] misses:0}
	I0516 22:47:07.661396    8032 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:47:07.661396    8032 network_create.go:115] attempt to create docker network offline-docker-20220516224650-2444 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0516 22:47:07.675142    8032 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220516224650-2444
	W0516 22:47:08.865745    8032 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220516224650-2444 returned with exit code 1
	I0516 22:47:08.865745    8032 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220516224650-2444: (1.1905942s)
	W0516 22:47:08.865745    8032 network_create.go:107] failed to create docker network offline-docker-20220516224650-2444 192.168.58.0/24, will retry: subnet is taken
	I0516 22:47:08.889313    8032 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003481b8] amended:true}} dirty:map[192.168.49.0:0xc0003481b8 192.168.58.0:0xc000114310] misses:1}
	I0516 22:47:08.889313    8032 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:47:08.909310    8032 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003481b8] amended:true}} dirty:map[192.168.49.0:0xc0003481b8 192.168.58.0:0xc000114310 192.168.67.0:0xc0003483b0] misses:1}
	I0516 22:47:08.909310    8032 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:47:08.909310    8032 network_create.go:115] attempt to create docker network offline-docker-20220516224650-2444 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0516 22:47:08.919356    8032 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220516224650-2444
	W0516 22:47:10.164222    8032 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220516224650-2444 returned with exit code 1
	I0516 22:47:10.164222    8032 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220516224650-2444: (1.2445598s)
	W0516 22:47:10.164222    8032 network_create.go:107] failed to create docker network offline-docker-20220516224650-2444 192.168.67.0/24, will retry: subnet is taken
	I0516 22:47:10.184381    8032 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003481b8] amended:true}} dirty:map[192.168.49.0:0xc0003481b8 192.168.58.0:0xc000114310 192.168.67.0:0xc0003483b0] misses:2}
	I0516 22:47:10.184381    8032 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:47:10.202748    8032 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003481b8] amended:true}} dirty:map[192.168.49.0:0xc0003481b8 192.168.58.0:0xc000114310 192.168.67.0:0xc0003483b0 192.168.76.0:0xc000006580] misses:2}
	I0516 22:47:10.203200    8032 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:47:10.203200    8032 network_create.go:115] attempt to create docker network offline-docker-20220516224650-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0516 22:47:10.212883    8032 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220516224650-2444
	W0516 22:47:11.299290    8032 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220516224650-2444 returned with exit code 1
	I0516 22:47:11.299412    8032 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220516224650-2444: (1.0863986s)
	E0516 22:47:11.299412    8032 network_create.go:104] error while trying to create docker network offline-docker-20220516224650-2444 192.168.76.0/24: create docker network offline-docker-20220516224650-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220516224650-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 1d2134f951381d9c0bd86a3fbb139cfb92c1f1786f705c90c9281b736a681c8e (br-1d2134f95138): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	W0516 22:47:11.299412    8032 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network offline-docker-20220516224650-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220516224650-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 1d2134f951381d9c0bd86a3fbb139cfb92c1f1786f705c90c9281b736a681c8e (br-1d2134f95138): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network offline-docker-20220516224650-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220516224650-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 1d2134f951381d9c0bd86a3fbb139cfb92c1f1786f705c90c9281b736a681c8e (br-1d2134f95138): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	I0516 22:47:11.316330    8032 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 22:47:12.433363    8032 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1170242s)
	I0516 22:47:12.442712    8032 cli_runner.go:164] Run: docker volume create offline-docker-20220516224650-2444 --label name.minikube.sigs.k8s.io=offline-docker-20220516224650-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 22:47:13.514281    8032 cli_runner.go:211] docker volume create offline-docker-20220516224650-2444 --label name.minikube.sigs.k8s.io=offline-docker-20220516224650-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 22:47:13.514281    8032 cli_runner.go:217] Completed: docker volume create offline-docker-20220516224650-2444 --label name.minikube.sigs.k8s.io=offline-docker-20220516224650-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0715614s)
	I0516 22:47:13.514281    8032 client.go:171] LocalClient.Create took 11.7494604s
	I0516 22:47:15.531331    8032 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:47:15.538863    8032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444
	W0516 22:47:16.660991    8032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444 returned with exit code 1
	I0516 22:47:16.661047    8032 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444: (1.1221198s)
	I0516 22:47:16.661047    8032 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220516224650-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	I0516 22:47:16.949936    8032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444
	W0516 22:47:18.026953    8032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444 returned with exit code 1
	I0516 22:47:18.026953    8032 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444: (1.0769042s)
	W0516 22:47:18.026953    8032 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220516224650-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	
	W0516 22:47:18.026953    8032 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220516224650-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	I0516 22:47:18.038253    8032 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:47:18.046312    8032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444
	W0516 22:47:19.093385    8032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444 returned with exit code 1
	I0516 22:47:19.093424    8032 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444: (1.0469385s)
	I0516 22:47:19.093753    8032 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220516224650-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	I0516 22:47:19.395490    8032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444
	W0516 22:47:20.467149    8032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444 returned with exit code 1
	I0516 22:47:20.467266    8032 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444: (1.0716505s)
	W0516 22:47:20.467449    8032 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220516224650-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	
	W0516 22:47:20.467504    8032 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220516224650-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	I0516 22:47:20.467548    8032 start.go:134] duration metric: createHost completed in 18.7073341s
	I0516 22:47:20.467548    8032 start.go:81] releasing machines lock for "offline-docker-20220516224650-2444", held for 18.7076043s
	W0516 22:47:20.467777    8032 start.go:608] error starting host: creating host: create: creating: setting up container node: creating volume for offline-docker-20220516224650-2444 container: docker volume create offline-docker-20220516224650-2444 --label name.minikube.sigs.k8s.io=offline-docker-20220516224650-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create offline-docker-20220516224650-2444: error while creating volume root path '/var/lib/docker/volumes/offline-docker-20220516224650-2444': mkdir /var/lib/docker/volumes/offline-docker-20220516224650-2444: read-only file system
	I0516 22:47:20.486479    8032 cli_runner.go:164] Run: docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}
	W0516 22:47:21.596865    8032 cli_runner.go:211] docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:47:21.597090    8032 cli_runner.go:217] Completed: docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}: (1.1103776s)
	I0516 22:47:21.597218    8032 delete.go:82] Unable to get host status for offline-docker-20220516224650-2444, assuming it has already been deleted: state: unknown state "offline-docker-20220516224650-2444": docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	W0516 22:47:21.597548    8032 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for offline-docker-20220516224650-2444 container: docker volume create offline-docker-20220516224650-2444 --label name.minikube.sigs.k8s.io=offline-docker-20220516224650-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create offline-docker-20220516224650-2444: error while creating volume root path '/var/lib/docker/volumes/offline-docker-20220516224650-2444': mkdir /var/lib/docker/volumes/offline-docker-20220516224650-2444: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for offline-docker-20220516224650-2444 container: docker volume create offline-docker-20220516224650-2444 --label name.minikube.sigs.k8s.io=offline-docker-20220516224650-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create offline-docker-20220516224650-2444: error while creating volume root path '/var/lib/docker/volumes/offline-docker-20220516224650-2444': mkdir /var/lib/docker/volumes/offline-docker-20220516224650-2444: read-only file system
	
	I0516 22:47:21.597548    8032 start.go:623] Will try again in 5 seconds ...
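The repeated "will retry after 276.165072ms / 291.140013ms / 462.318748ms …" lines above come from a bounded retry helper that grows the delay between attempts and randomises it slightly. A minimal Python sketch of that pattern — exponential backoff with jitter — follows; the parameter names, factor, and jitter formula are assumptions for illustration, not minikube's actual retry.go:

```python
import random
import time

def retry(fn, attempts=7, base_delay=0.3, factor=1.5, jitter=0.25):
    """Call fn until it succeeds or attempts run out.
    Between tries, sleep base_delay * factor**i, randomised by +/- jitter."""
    last_err = None
    for i in range(attempts):
        try:
            return fn()
        except Exception as err:  # a real helper would filter retryable errors
            last_err = err
            delay = base_delay * (factor ** i)
            delay *= 1 + random.uniform(-jitter, jitter)
            time.sleep(delay)
    raise last_err

# Example: a probe that fails twice (like the missing container above),
# then succeeds on the third attempt.
calls = {"n": 0}
def probe():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("No such container")
    return "running"

print(retry(probe, base_delay=0.01))  # "running" after two retried failures
```

Note the failure here is not transient — the container was never created because the volume create hit a read-only filesystem — so every retry sees the same "No such container" and the helper eventually gives up, which is why the log then moves on to the 5-second restart path.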
	I0516 22:47:26.604356    8032 start.go:352] acquiring machines lock for offline-docker-20220516224650-2444: {Name:mkceaabefdb859cf0f8f457e580e289966444f65 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:47:26.604757    8032 start.go:356] acquired machines lock for "offline-docker-20220516224650-2444" in 188.9µs
	I0516 22:47:26.605038    8032 start.go:94] Skipping create...Using existing machine configuration
	I0516 22:47:26.605106    8032 fix.go:55] fixHost starting: 
	I0516 22:47:26.619969    8032 cli_runner.go:164] Run: docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}
	W0516 22:47:27.667557    8032 cli_runner.go:211] docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:47:27.667621    8032 cli_runner.go:217] Completed: docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}: (1.0472719s)
	I0516 22:47:27.667621    8032 fix.go:103] recreateIfNeeded on offline-docker-20220516224650-2444: state= err=unknown state "offline-docker-20220516224650-2444": docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	I0516 22:47:27.667621    8032 fix.go:108] machineExists: false. err=machine does not exist
	I0516 22:47:27.670796    8032 out.go:177] * docker "offline-docker-20220516224650-2444" container is missing, will recreate.
	I0516 22:47:27.674212    8032 delete.go:124] DEMOLISHING offline-docker-20220516224650-2444 ...
	I0516 22:47:27.689694    8032 cli_runner.go:164] Run: docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}
	W0516 22:47:28.728109    8032 cli_runner.go:211] docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:47:28.728109    8032 cli_runner.go:217] Completed: docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}: (1.038407s)
	W0516 22:47:28.728109    8032 stop.go:75] unable to get state: unknown state "offline-docker-20220516224650-2444": docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	I0516 22:47:28.728109    8032 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "offline-docker-20220516224650-2444": docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	I0516 22:47:28.743115    8032 cli_runner.go:164] Run: docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}
	W0516 22:47:29.826563    8032 cli_runner.go:211] docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:47:29.826563    8032 cli_runner.go:217] Completed: docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}: (1.0834404s)
	I0516 22:47:29.826563    8032 delete.go:82] Unable to get host status for offline-docker-20220516224650-2444, assuming it has already been deleted: state: unknown state "offline-docker-20220516224650-2444": docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	I0516 22:47:29.835269    8032 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-20220516224650-2444
	W0516 22:47:30.904792    8032 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-20220516224650-2444 returned with exit code 1
	I0516 22:47:30.904825    8032 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} offline-docker-20220516224650-2444: (1.0692801s)
	I0516 22:47:30.904922    8032 kic.go:356] could not find the container offline-docker-20220516224650-2444 to remove it. will try anyways
	I0516 22:47:30.913444    8032 cli_runner.go:164] Run: docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}
	W0516 22:47:32.014023    8032 cli_runner.go:211] docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:47:32.014023    8032 cli_runner.go:217] Completed: docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}: (1.1005709s)
	W0516 22:47:32.014023    8032 oci.go:84] error getting container status, will try to delete anyways: unknown state "offline-docker-20220516224650-2444": docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	I0516 22:47:32.024444    8032 cli_runner.go:164] Run: docker exec --privileged -t offline-docker-20220516224650-2444 /bin/bash -c "sudo init 0"
	W0516 22:47:33.148448    8032 cli_runner.go:211] docker exec --privileged -t offline-docker-20220516224650-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 22:47:33.148553    8032 cli_runner.go:217] Completed: docker exec --privileged -t offline-docker-20220516224650-2444 /bin/bash -c "sudo init 0": (1.1237643s)
	I0516 22:47:33.148646    8032 oci.go:641] error shutdown offline-docker-20220516224650-2444: docker exec --privileged -t offline-docker-20220516224650-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	I0516 22:47:34.167641    8032 cli_runner.go:164] Run: docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}
	W0516 22:47:35.267941    8032 cli_runner.go:211] docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:47:35.267992    8032 cli_runner.go:217] Completed: docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}: (1.1001924s)
	I0516 22:47:35.268093    8032 oci.go:653] temporary error verifying shutdown: unknown state "offline-docker-20220516224650-2444": docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	I0516 22:47:35.268093    8032 oci.go:655] temporary error: container offline-docker-20220516224650-2444 status is  but expect it to be exited
	I0516 22:47:35.268186    8032 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "offline-docker-20220516224650-2444": docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	I0516 22:47:35.745315    8032 cli_runner.go:164] Run: docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}
	W0516 22:47:36.817686    8032 cli_runner.go:211] docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:47:36.817686    8032 cli_runner.go:217] Completed: docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}: (1.0723628s)
	I0516 22:47:36.817686    8032 oci.go:653] temporary error verifying shutdown: unknown state "offline-docker-20220516224650-2444": docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	I0516 22:47:36.817686    8032 oci.go:655] temporary error: container offline-docker-20220516224650-2444 status is  but expect it to be exited
	I0516 22:47:36.817686    8032 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "offline-docker-20220516224650-2444": docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	I0516 22:47:37.726553    8032 cli_runner.go:164] Run: docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}
	W0516 22:47:38.830127    8032 cli_runner.go:211] docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:47:38.830127    8032 cli_runner.go:217] Completed: docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}: (1.1035298s)
	I0516 22:47:38.830127    8032 oci.go:653] temporary error verifying shutdown: unknown state "offline-docker-20220516224650-2444": docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	I0516 22:47:38.830127    8032 oci.go:655] temporary error: container offline-docker-20220516224650-2444 status is  but expect it to be exited
	I0516 22:47:38.830127    8032 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "offline-docker-20220516224650-2444": docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	I0516 22:47:39.485510    8032 cli_runner.go:164] Run: docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}
	W0516 22:47:40.586685    8032 cli_runner.go:211] docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:47:40.586998    8032 cli_runner.go:217] Completed: docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}: (1.1011668s)
	I0516 22:47:40.587076    8032 oci.go:653] temporary error verifying shutdown: unknown state "offline-docker-20220516224650-2444": docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	I0516 22:47:40.587121    8032 oci.go:655] temporary error: container offline-docker-20220516224650-2444 status is  but expect it to be exited
	I0516 22:47:40.587181    8032 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "offline-docker-20220516224650-2444": docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	I0516 22:47:41.710982    8032 cli_runner.go:164] Run: docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}
	W0516 22:47:42.774028    8032 cli_runner.go:211] docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:47:42.774158    8032 cli_runner.go:217] Completed: docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}: (1.0630371s)
	I0516 22:47:42.774216    8032 oci.go:653] temporary error verifying shutdown: unknown state "offline-docker-20220516224650-2444": docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	I0516 22:47:42.774216    8032 oci.go:655] temporary error: container offline-docker-20220516224650-2444 status is  but expect it to be exited
	I0516 22:47:42.774216    8032 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "offline-docker-20220516224650-2444": docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	I0516 22:47:44.307093    8032 cli_runner.go:164] Run: docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}
	W0516 22:47:45.385579    8032 cli_runner.go:211] docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:47:45.385579    8032 cli_runner.go:217] Completed: docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}: (1.0784775s)
	I0516 22:47:45.385579    8032 oci.go:653] temporary error verifying shutdown: unknown state "offline-docker-20220516224650-2444": docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	I0516 22:47:45.385579    8032 oci.go:655] temporary error: container offline-docker-20220516224650-2444 status is  but expect it to be exited
	I0516 22:47:45.385579    8032 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "offline-docker-20220516224650-2444": docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	I0516 22:47:48.443124    8032 cli_runner.go:164] Run: docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}
	W0516 22:47:49.563436    8032 cli_runner.go:211] docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:47:49.563699    8032 cli_runner.go:217] Completed: docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}: (1.1195268s)
	I0516 22:47:49.563753    8032 oci.go:653] temporary error verifying shutdown: unknown state "offline-docker-20220516224650-2444": docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	I0516 22:47:49.563817    8032 oci.go:655] temporary error: container offline-docker-20220516224650-2444 status is  but expect it to be exited
	I0516 22:47:49.563868    8032 oci.go:88] couldn't shut down offline-docker-20220516224650-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "offline-docker-20220516224650-2444": docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	 
	I0516 22:47:49.572453    8032 cli_runner.go:164] Run: docker rm -f -v offline-docker-20220516224650-2444
	I0516 22:47:50.726469    8032 cli_runner.go:217] Completed: docker rm -f -v offline-docker-20220516224650-2444: (1.1531292s)
	I0516 22:47:50.734516    8032 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-20220516224650-2444
	W0516 22:47:51.823398    8032 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-20220516224650-2444 returned with exit code 1
	I0516 22:47:51.823398    8032 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} offline-docker-20220516224650-2444: (1.0888743s)
	I0516 22:47:51.834749    8032 cli_runner.go:164] Run: docker network inspect offline-docker-20220516224650-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:47:52.900077    8032 cli_runner.go:211] docker network inspect offline-docker-20220516224650-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:47:52.900077    8032 cli_runner.go:217] Completed: docker network inspect offline-docker-20220516224650-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0653197s)
	I0516 22:47:52.907068    8032 network_create.go:272] running [docker network inspect offline-docker-20220516224650-2444] to gather additional debugging logs...
	I0516 22:47:52.907068    8032 cli_runner.go:164] Run: docker network inspect offline-docker-20220516224650-2444
	W0516 22:47:54.021547    8032 cli_runner.go:211] docker network inspect offline-docker-20220516224650-2444 returned with exit code 1
	I0516 22:47:54.021547    8032 cli_runner.go:217] Completed: docker network inspect offline-docker-20220516224650-2444: (1.1143611s)
	I0516 22:47:54.021547    8032 network_create.go:275] error running [docker network inspect offline-docker-20220516224650-2444]: docker network inspect offline-docker-20220516224650-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: offline-docker-20220516224650-2444
	I0516 22:47:54.021547    8032 network_create.go:277] output of [docker network inspect offline-docker-20220516224650-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: offline-docker-20220516224650-2444
	
	** /stderr **
	W0516 22:47:54.022205    8032 delete.go:139] delete failed (probably ok) <nil>
	I0516 22:47:54.022205    8032 fix.go:115] Sleeping 1 second for extra luck!
	I0516 22:47:55.028479    8032 start.go:131] createHost starting for "" (driver="docker")
	I0516 22:47:55.036885    8032 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0516 22:47:55.036885    8032 start.go:165] libmachine.API.Create for "offline-docker-20220516224650-2444" (driver="docker")
	I0516 22:47:55.036885    8032 client.go:168] LocalClient.Create starting
	I0516 22:47:55.037811    8032 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 22:47:55.037811    8032 main.go:134] libmachine: Decoding PEM data...
	I0516 22:47:55.037811    8032 main.go:134] libmachine: Parsing certificate...
	I0516 22:47:55.037811    8032 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 22:47:55.037811    8032 main.go:134] libmachine: Decoding PEM data...
	I0516 22:47:55.037811    8032 main.go:134] libmachine: Parsing certificate...
	I0516 22:47:55.047859    8032 cli_runner.go:164] Run: docker network inspect offline-docker-20220516224650-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:47:56.233048    8032 cli_runner.go:211] docker network inspect offline-docker-20220516224650-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:47:56.233048    8032 cli_runner.go:217] Completed: docker network inspect offline-docker-20220516224650-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1851797s)
	I0516 22:47:56.242368    8032 network_create.go:272] running [docker network inspect offline-docker-20220516224650-2444] to gather additional debugging logs...
	I0516 22:47:56.242368    8032 cli_runner.go:164] Run: docker network inspect offline-docker-20220516224650-2444
	W0516 22:47:57.349338    8032 cli_runner.go:211] docker network inspect offline-docker-20220516224650-2444 returned with exit code 1
	I0516 22:47:57.349338    8032 cli_runner.go:217] Completed: docker network inspect offline-docker-20220516224650-2444: (1.1069621s)
	I0516 22:47:57.349338    8032 network_create.go:275] error running [docker network inspect offline-docker-20220516224650-2444]: docker network inspect offline-docker-20220516224650-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: offline-docker-20220516224650-2444
	I0516 22:47:57.349338    8032 network_create.go:277] output of [docker network inspect offline-docker-20220516224650-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: offline-docker-20220516224650-2444
	
	** /stderr **
	I0516 22:47:57.357395    8032 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 22:47:58.490615    8032 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1330358s)
	I0516 22:47:58.506276    8032 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003481b8] amended:true}} dirty:map[192.168.49.0:0xc0003481b8 192.168.58.0:0xc000114310 192.168.67.0:0xc0003483b0 192.168.76.0:0xc000006580] misses:2}
	I0516 22:47:58.506780    8032 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:47:58.522697    8032 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003481b8] amended:true}} dirty:map[192.168.49.0:0xc0003481b8 192.168.58.0:0xc000114310 192.168.67.0:0xc0003483b0 192.168.76.0:0xc000006580] misses:3}
	I0516 22:47:58.522697    8032 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:47:58.537082    8032 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003481b8 192.168.58.0:0xc000114310 192.168.67.0:0xc0003483b0 192.168.76.0:0xc000006580] amended:false}} dirty:map[] misses:0}
	I0516 22:47:58.537082    8032 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:47:58.554773    8032 network.go:279] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003481b8 192.168.58.0:0xc000114310 192.168.67.0:0xc0003483b0 192.168.76.0:0xc000006580] amended:false}} dirty:map[] misses:0}
	I0516 22:47:58.555373    8032 network.go:238] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:47:58.571273    8032 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003481b8 192.168.58.0:0xc000114310 192.168.67.0:0xc0003483b0 192.168.76.0:0xc000006580] amended:true}} dirty:map[192.168.49.0:0xc0003481b8 192.168.58.0:0xc000114310 192.168.67.0:0xc0003483b0 192.168.76.0:0xc000006580 192.168.85.0:0xc000114440] misses:0}
	I0516 22:47:58.571273    8032 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:47:58.571273    8032 network_create.go:115] attempt to create docker network offline-docker-20220516224650-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0516 22:47:58.578308    8032 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220516224650-2444
	W0516 22:47:59.626505    8032 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220516224650-2444 returned with exit code 1
	I0516 22:47:59.626505    8032 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220516224650-2444: (1.0481885s)
	E0516 22:47:59.626505    8032 network_create.go:104] error while trying to create docker network offline-docker-20220516224650-2444 192.168.85.0/24: create docker network offline-docker-20220516224650-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220516224650-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network c84d3732c65a3213a6b0d27988c3b8afd730b02526050cabe727f91a4e6dda49 (br-c84d3732c65a): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	W0516 22:47:59.626505    8032 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network offline-docker-20220516224650-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220516224650-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network c84d3732c65a3213a6b0d27988c3b8afd730b02526050cabe727f91a4e6dda49 (br-c84d3732c65a): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network offline-docker-20220516224650-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220516224650-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network c84d3732c65a3213a6b0d27988c3b8afd730b02526050cabe727f91a4e6dda49 (br-c84d3732c65a): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	I0516 22:47:59.640985    8032 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 22:48:00.733650    8032 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0926567s)
	I0516 22:48:00.742921    8032 cli_runner.go:164] Run: docker volume create offline-docker-20220516224650-2444 --label name.minikube.sigs.k8s.io=offline-docker-20220516224650-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 22:48:01.786931    8032 cli_runner.go:211] docker volume create offline-docker-20220516224650-2444 --label name.minikube.sigs.k8s.io=offline-docker-20220516224650-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 22:48:01.787094    8032 cli_runner.go:217] Completed: docker volume create offline-docker-20220516224650-2444 --label name.minikube.sigs.k8s.io=offline-docker-20220516224650-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0439683s)
	I0516 22:48:01.787152    8032 client.go:171] LocalClient.Create took 6.7502145s
	I0516 22:48:03.809540    8032 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:48:03.817361    8032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444
	W0516 22:48:04.937156    8032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444 returned with exit code 1
	I0516 22:48:04.937156    8032 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444: (1.1196222s)
	I0516 22:48:04.937156    8032 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220516224650-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	I0516 22:48:05.278095    8032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444
	W0516 22:48:06.357685    8032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444 returned with exit code 1
	I0516 22:48:06.357685    8032 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444: (1.0795811s)
	W0516 22:48:06.357685    8032 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220516224650-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	
	W0516 22:48:06.357685    8032 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220516224650-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	I0516 22:48:06.367694    8032 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:48:06.375679    8032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444
	W0516 22:48:07.453497    8032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444 returned with exit code 1
	I0516 22:48:07.453497    8032 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444: (1.077809s)
	I0516 22:48:07.453497    8032 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220516224650-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	I0516 22:48:07.698118    8032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444
	W0516 22:48:08.762339    8032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444 returned with exit code 1
	I0516 22:48:08.762339    8032 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444: (1.0642133s)
	W0516 22:48:08.762339    8032 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220516224650-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	
	W0516 22:48:08.762339    8032 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220516224650-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	I0516 22:48:08.762339    8032 start.go:134] duration metric: createHost completed in 13.7336815s
	I0516 22:48:08.774238    8032 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:48:08.780853    8032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444
	W0516 22:48:09.851148    8032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444 returned with exit code 1
	I0516 22:48:09.851148    8032 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444: (1.0700626s)
	I0516 22:48:09.851148    8032 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220516224650-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	I0516 22:48:10.113163    8032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444
	W0516 22:48:11.183772    8032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444 returned with exit code 1
	I0516 22:48:11.183839    8032 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444: (1.0705658s)
	W0516 22:48:11.183940    8032 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220516224650-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	
	W0516 22:48:11.183940    8032 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220516224650-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	I0516 22:48:11.195715    8032 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:48:11.202410    8032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444
	W0516 22:48:12.341671    8032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444 returned with exit code 1
	I0516 22:48:12.341671    8032 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444: (1.1392518s)
	I0516 22:48:12.342640    8032 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220516224650-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	I0516 22:48:12.555753    8032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444
	W0516 22:48:13.635397    8032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444 returned with exit code 1
	I0516 22:48:13.635397    8032 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444: (1.0794819s)
	W0516 22:48:13.635397    8032 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220516224650-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	
	W0516 22:48:13.635397    8032 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220516224650-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220516224650-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444
	I0516 22:48:13.635397    8032 fix.go:57] fixHost completed within 47.0299225s
	I0516 22:48:13.635397    8032 start.go:81] releasing machines lock for "offline-docker-20220516224650-2444", held for 47.0302718s
	W0516 22:48:13.636392    8032 out.go:239] * Failed to start docker container. Running "minikube delete -p offline-docker-20220516224650-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for offline-docker-20220516224650-2444 container: docker volume create offline-docker-20220516224650-2444 --label name.minikube.sigs.k8s.io=offline-docker-20220516224650-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create offline-docker-20220516224650-2444: error while creating volume root path '/var/lib/docker/volumes/offline-docker-20220516224650-2444': mkdir /var/lib/docker/volumes/offline-docker-20220516224650-2444: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p offline-docker-20220516224650-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for offline-docker-20220516224650-2444 container: docker volume create offline-docker-20220516224650-2444 --label name.minikube.sigs.k8s.io=offline-docker-20220516224650-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create offline-docker-20220516224650-2444: error while creating volume root path '/var/lib/docker/volumes/offline-docker-20220516224650-2444': mkdir /var/lib/docker/volumes/offline-docker-20220516224650-2444: read-only file system
	
	I0516 22:48:13.641322    8032 out.go:177] 
	W0516 22:48:13.643591    8032 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for offline-docker-20220516224650-2444 container: docker volume create offline-docker-20220516224650-2444 --label name.minikube.sigs.k8s.io=offline-docker-20220516224650-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create offline-docker-20220516224650-2444: error while creating volume root path '/var/lib/docker/volumes/offline-docker-20220516224650-2444': mkdir /var/lib/docker/volumes/offline-docker-20220516224650-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for offline-docker-20220516224650-2444 container: docker volume create offline-docker-20220516224650-2444 --label name.minikube.sigs.k8s.io=offline-docker-20220516224650-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create offline-docker-20220516224650-2444: error while creating volume root path '/var/lib/docker/volumes/offline-docker-20220516224650-2444': mkdir /var/lib/docker/volumes/offline-docker-20220516224650-2444: read-only file system
	
	W0516 22:48:13.643591    8032 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0516 22:48:13.643591    8032 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0516 22:48:13.646934    8032 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-windows-amd64.exe start -p offline-docker-20220516224650-2444 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker failed: exit status 60
panic.go:482: *** TestOffline FAILED at 2022-05-16 22:48:13.7745123 +0000 GMT m=+3161.444738101
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestOffline]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect offline-docker-20220516224650-2444

=== CONT  TestOffline
helpers_test.go:231: (dbg) Non-zero exit: docker inspect offline-docker-20220516224650-2444: exit status 1 (1.1207132s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: offline-docker-20220516224650-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p offline-docker-20220516224650-2444 -n offline-docker-20220516224650-2444

=== CONT  TestOffline
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p offline-docker-20220516224650-2444 -n offline-docker-20220516224650-2444: exit status 7 (2.9442707s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:48:17.831545    8528 status.go:247] status error: host: state: unknown state "offline-docker-20220516224650-2444": docker container inspect offline-docker-20220516224650-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220516224650-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-20220516224650-2444" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "offline-docker-20220516224650-2444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-20220516224650-2444
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-20220516224650-2444: (9.0215017s)
--- FAIL: TestOffline (96.23s)

TestAddons/Setup (78.1s)
=== RUN   TestAddons/Setup
addons_test.go:75: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-20220516215732-2444 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:75: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p addons-20220516215732-2444 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: exit status 60 (1m18.0081233s)

-- stdout --
	* [addons-20220516215732-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node addons-20220516215732-2444 in cluster addons-20220516215732-2444
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	* docker "addons-20220516215732-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...

-- /stdout --
** stderr ** 
	I0516 21:57:32.394486    8008 out.go:296] Setting OutFile to fd 588 ...
	I0516 21:57:32.454749    8008 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 21:57:32.454749    8008 out.go:309] Setting ErrFile to fd 280...
	I0516 21:57:32.454749    8008 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 21:57:32.468540    8008 out.go:303] Setting JSON to false
	I0516 21:57:32.471737    8008 start.go:115] hostinfo: {"hostname":"minikube2","uptime":1364,"bootTime":1652736888,"procs":148,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0516 21:57:32.471737    8008 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0516 21:57:32.476939    8008 out.go:177] * [addons-20220516215732-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0516 21:57:32.481606    8008 notify.go:193] Checking for updates...
	I0516 21:57:32.484196    8008 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0516 21:57:32.486266    8008 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0516 21:57:32.489484    8008 out.go:177]   - MINIKUBE_LOCATION=12739
	I0516 21:57:32.492319    8008 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0516 21:57:32.494936    8008 driver.go:358] Setting default libvirt URI to qemu:///system
	I0516 21:57:35.029078    8008 docker.go:137] docker version: linux-20.10.14
	I0516 21:57:35.036865    8008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 21:57:37.033342    8008 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9963141s)
	I0516 21:57:37.034053    8008 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-16 21:57:36.0097104 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0516 21:57:37.039513    8008 out.go:177] * Using the docker driver based on user configuration
	I0516 21:57:37.043424    8008 start.go:284] selected driver: docker
	I0516 21:57:37.043503    8008 start.go:806] validating driver "docker" against <nil>
	I0516 21:57:37.043615    8008 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0516 21:57:37.115920    8008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 21:57:39.083755    8008 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9678271s)
	I0516 21:57:39.083755    8008 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-16 21:57:38.1038092 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0516 21:57:39.083755    8008 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0516 21:57:39.085184    8008 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0516 21:57:39.088630    8008 out.go:177] * Using Docker Desktop driver with the root privilege
	I0516 21:57:39.091078    8008 cni.go:95] Creating CNI manager for ""
	I0516 21:57:39.091172    8008 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0516 21:57:39.091172    8008 start_flags.go:306] config:
	{Name:addons-20220516215732-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:addons-20220516215732-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 21:57:39.095212    8008 out.go:177] * Starting control plane node addons-20220516215732-2444 in cluster addons-20220516215732-2444
	I0516 21:57:39.097531    8008 cache.go:120] Beginning downloading kic base image for docker with docker
	I0516 21:57:39.099476    8008 out.go:177] * Pulling base image ...
	I0516 21:57:39.103733    8008 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0516 21:57:39.103733    8008 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0516 21:57:39.103733    8008 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0516 21:57:39.104091    8008 cache.go:57] Caching tarball of preloaded images
	I0516 21:57:39.104236    8008 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0516 21:57:39.104931    8008 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0516 21:57:39.105106    8008 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-20220516215732-2444\config.json ...
	I0516 21:57:39.105106    8008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-20220516215732-2444\config.json: {Name:mk14b2faf7b76fdefd48bdf6066b677d8d9a3bbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0516 21:57:40.144099    8008 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0516 21:57:40.144178    8008 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 21:57:40.144446    8008 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 21:57:40.144590    8008 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
	I0516 21:57:40.144725    8008 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory, skipping pull
	I0516 21:57:40.144725    8008 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in cache, skipping pull
	I0516 21:57:40.144725    8008 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
	I0516 21:57:40.144725    8008 cache.go:160] Loading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from local cache
	I0516 21:57:40.144725    8008 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 21:57:42.385579    8008 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from cached tarball
	I0516 21:57:42.385579    8008 cache.go:206] Successfully downloaded all kic artifacts
	I0516 21:57:42.386304    8008 start.go:352] acquiring machines lock for addons-20220516215732-2444: {Name:mk5ca0fb00ada195b4328bfc93674790bf4ec0ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 21:57:42.386557    8008 start.go:356] acquired machines lock for "addons-20220516215732-2444" in 253.1µs
	I0516 21:57:42.386557    8008 start.go:91] Provisioning new machine with config: &{Name:addons-20220516215732-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:addons-20220516215732-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0516 21:57:42.386557    8008 start.go:131] createHost starting for "" (driver="docker")
	I0516 21:57:42.390927    8008 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0516 21:57:42.391157    8008 start.go:165] libmachine.API.Create for "addons-20220516215732-2444" (driver="docker")
	I0516 21:57:42.391157    8008 client.go:168] LocalClient.Create starting
	I0516 21:57:42.392462    8008 main.go:134] libmachine: Creating CA: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 21:57:42.515415    8008 main.go:134] libmachine: Creating client certificate: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 21:57:42.739885    8008 cli_runner.go:164] Run: docker network inspect addons-20220516215732-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 21:57:43.753913    8008 cli_runner.go:211] docker network inspect addons-20220516215732-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 21:57:43.754043    8008 cli_runner.go:217] Completed: docker network inspect addons-20220516215732-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0140249s)
	I0516 21:57:43.762926    8008 network_create.go:272] running [docker network inspect addons-20220516215732-2444] to gather additional debugging logs...
	I0516 21:57:43.762926    8008 cli_runner.go:164] Run: docker network inspect addons-20220516215732-2444
	W0516 21:57:44.786588    8008 cli_runner.go:211] docker network inspect addons-20220516215732-2444 returned with exit code 1
	I0516 21:57:44.786779    8008 cli_runner.go:217] Completed: docker network inspect addons-20220516215732-2444: (1.0234516s)
	I0516 21:57:44.786870    8008 network_create.go:275] error running [docker network inspect addons-20220516215732-2444]: docker network inspect addons-20220516215732-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20220516215732-2444
	I0516 21:57:44.786915    8008 network_create.go:277] output of [docker network inspect addons-20220516215732-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20220516215732-2444
	
	** /stderr **
	I0516 21:57:44.795397    8008 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 21:57:45.803081    8008 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0074688s)
	I0516 21:57:45.824324    8008 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000006b90] misses:0}
	I0516 21:57:45.824324    8008 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 21:57:45.824324    8008 network_create.go:115] attempt to create docker network addons-20220516215732-2444 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0516 21:57:45.834701    8008 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220516215732-2444
	W0516 21:57:46.866624    8008 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220516215732-2444 returned with exit code 1
	I0516 21:57:46.866856    8008 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220516215732-2444: (1.0317884s)
	W0516 21:57:46.866987    8008 network_create.go:107] failed to create docker network addons-20220516215732-2444 192.168.49.0/24, will retry: subnet is taken
	I0516 21:57:46.886352    8008 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006b90] amended:false}} dirty:map[] misses:0}
	I0516 21:57:46.886352    8008 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 21:57:46.904316    8008 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006b90] amended:true}} dirty:map[192.168.49.0:0xc000006b90 192.168.58.0:0xc000764270] misses:0}
	I0516 21:57:46.904316    8008 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 21:57:46.904316    8008 network_create.go:115] attempt to create docker network addons-20220516215732-2444 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0516 21:57:46.912315    8008 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220516215732-2444
	W0516 21:57:47.927602    8008 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220516215732-2444 returned with exit code 1
	I0516 21:57:47.927602    8008 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220516215732-2444: (1.0150877s)
	W0516 21:57:47.927602    8008 network_create.go:107] failed to create docker network addons-20220516215732-2444 192.168.58.0/24, will retry: subnet is taken
	I0516 21:57:47.944780    8008 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006b90] amended:true}} dirty:map[192.168.49.0:0xc000006b90 192.168.58.0:0xc000764270] misses:1}
	I0516 21:57:47.944780    8008 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 21:57:47.961233    8008 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006b90] amended:true}} dirty:map[192.168.49.0:0xc000006b90 192.168.58.0:0xc000764270 192.168.67.0:0xc000592528] misses:1}
	I0516 21:57:47.962369    8008 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 21:57:47.962369    8008 network_create.go:115] attempt to create docker network addons-20220516215732-2444 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0516 21:57:47.970725    8008 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220516215732-2444
	W0516 21:57:48.992681    8008 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220516215732-2444 returned with exit code 1
	I0516 21:57:48.992866    8008 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220516215732-2444: (1.0219527s)
	W0516 21:57:48.992866    8008 network_create.go:107] failed to create docker network addons-20220516215732-2444 192.168.67.0/24, will retry: subnet is taken
	I0516 21:57:49.010581    8008 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006b90] amended:true}} dirty:map[192.168.49.0:0xc000006b90 192.168.58.0:0xc000764270 192.168.67.0:0xc000592528] misses:2}
	I0516 21:57:49.010581    8008 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 21:57:49.030335    8008 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006b90] amended:true}} dirty:map[192.168.49.0:0xc000006b90 192.168.58.0:0xc000764270 192.168.67.0:0xc000592528 192.168.76.0:0xc000764308] misses:2}
	I0516 21:57:49.030335    8008 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 21:57:49.030335    8008 network_create.go:115] attempt to create docker network addons-20220516215732-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0516 21:57:49.038540    8008 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220516215732-2444
	W0516 21:57:50.129060    8008 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220516215732-2444 returned with exit code 1
	I0516 21:57:50.129060    8008 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220516215732-2444: (1.0904019s)
	E0516 21:57:50.129060    8008 network_create.go:104] error while trying to create docker network addons-20220516215732-2444 192.168.76.0/24: create docker network addons-20220516215732-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220516215732-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: failed to update bridge store for object type *bridge.networkConfiguration: open /var/lib/docker/network/files/local-kv.db: read-only file system
	W0516 21:57:50.129060    8008 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network addons-20220516215732-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220516215732-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: failed to update bridge store for object type *bridge.networkConfiguration: open /var/lib/docker/network/files/local-kv.db: read-only file system
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network addons-20220516215732-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220516215732-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: failed to update bridge store for object type *bridge.networkConfiguration: open /var/lib/docker/network/files/local-kv.db: read-only file system
	
	I0516 21:57:50.145973    8008 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 21:57:51.152247    8008 cli_runner.go:164] Run: docker volume create addons-20220516215732-2444 --label name.minikube.sigs.k8s.io=addons-20220516215732-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 21:57:52.206706    8008 cli_runner.go:211] docker volume create addons-20220516215732-2444 --label name.minikube.sigs.k8s.io=addons-20220516215732-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 21:57:52.207106    8008 cli_runner.go:217] Completed: docker volume create addons-20220516215732-2444 --label name.minikube.sigs.k8s.io=addons-20220516215732-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0544559s)
	I0516 21:57:52.207303    8008 client.go:171] LocalClient.Create took 9.8160134s
	I0516 21:57:54.232787    8008 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 21:57:54.240452    8008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444
	W0516 21:57:55.294641    8008 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444 returned with exit code 1
	I0516 21:57:55.294641    8008 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444: (1.0541852s)
	I0516 21:57:55.294641    8008 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220516215732-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	I0516 21:57:55.583258    8008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444
	W0516 21:57:56.594177    8008 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444 returned with exit code 1
	I0516 21:57:56.594177    8008 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444: (1.0109158s)
	W0516 21:57:56.594177    8008 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220516215732-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	
	W0516 21:57:56.594177    8008 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220516215732-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	I0516 21:57:56.605379    8008 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 21:57:56.612614    8008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444
	W0516 21:57:57.658008    8008 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444 returned with exit code 1
	I0516 21:57:57.658008    8008 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444: (1.0452132s)
	I0516 21:57:57.658008    8008 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220516215732-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	I0516 21:57:57.959584    8008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444
	W0516 21:57:59.001955    8008 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444 returned with exit code 1
	I0516 21:57:59.001955    8008 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444: (1.0423672s)
	W0516 21:57:59.001955    8008 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220516215732-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	
	W0516 21:57:59.001955    8008 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220516215732-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	I0516 21:57:59.001955    8008 start.go:134] duration metric: createHost completed in 16.6153386s
	I0516 21:57:59.001955    8008 start.go:81] releasing machines lock for "addons-20220516215732-2444", held for 16.6153386s
	W0516 21:57:59.001955    8008 start.go:608] error starting host: creating host: create: creating: setting up container node: creating volume for addons-20220516215732-2444 container: docker volume create addons-20220516215732-2444 --label name.minikube.sigs.k8s.io=addons-20220516215732-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create addons-20220516215732-2444: error while creating volume root path '/var/lib/docker/volumes/addons-20220516215732-2444': mkdir /var/lib/docker/volumes/addons-20220516215732-2444: read-only file system
	I0516 21:57:59.018355    8008 cli_runner.go:164] Run: docker container inspect addons-20220516215732-2444 --format={{.State.Status}}
	W0516 21:58:00.069181    8008 cli_runner.go:211] docker container inspect addons-20220516215732-2444 --format={{.State.Status}} returned with exit code 1
	I0516 21:58:00.069181    8008 cli_runner.go:217] Completed: docker container inspect addons-20220516215732-2444 --format={{.State.Status}}: (1.0508226s)
	I0516 21:58:00.069181    8008 delete.go:82] Unable to get host status for addons-20220516215732-2444, assuming it has already been deleted: state: unknown state "addons-20220516215732-2444": docker container inspect addons-20220516215732-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	W0516 21:58:00.069181    8008 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for addons-20220516215732-2444 container: docker volume create addons-20220516215732-2444 --label name.minikube.sigs.k8s.io=addons-20220516215732-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create addons-20220516215732-2444: error while creating volume root path '/var/lib/docker/volumes/addons-20220516215732-2444': mkdir /var/lib/docker/volumes/addons-20220516215732-2444: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for addons-20220516215732-2444 container: docker volume create addons-20220516215732-2444 --label name.minikube.sigs.k8s.io=addons-20220516215732-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create addons-20220516215732-2444: error while creating volume root path '/var/lib/docker/volumes/addons-20220516215732-2444': mkdir /var/lib/docker/volumes/addons-20220516215732-2444: read-only file system
	
	I0516 21:58:00.069181    8008 start.go:623] Will try again in 5 seconds ...
	I0516 21:58:05.082429    8008 start.go:352] acquiring machines lock for addons-20220516215732-2444: {Name:mk5ca0fb00ada195b4328bfc93674790bf4ec0ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 21:58:05.082429    8008 start.go:356] acquired machines lock for "addons-20220516215732-2444" in 0s
	I0516 21:58:05.082429    8008 start.go:94] Skipping create...Using existing machine configuration
	I0516 21:58:05.082999    8008 fix.go:55] fixHost starting: 
	I0516 21:58:05.102421    8008 cli_runner.go:164] Run: docker container inspect addons-20220516215732-2444 --format={{.State.Status}}
	W0516 21:58:06.116291    8008 cli_runner.go:211] docker container inspect addons-20220516215732-2444 --format={{.State.Status}} returned with exit code 1
	I0516 21:58:06.116338    8008 cli_runner.go:217] Completed: docker container inspect addons-20220516215732-2444 --format={{.State.Status}}: (1.0136152s)
	I0516 21:58:06.116583    8008 fix.go:103] recreateIfNeeded on addons-20220516215732-2444: state= err=unknown state "addons-20220516215732-2444": docker container inspect addons-20220516215732-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	I0516 21:58:06.116583    8008 fix.go:108] machineExists: false. err=machine does not exist
	I0516 21:58:06.122425    8008 out.go:177] * docker "addons-20220516215732-2444" container is missing, will recreate.
	I0516 21:58:06.126782    8008 delete.go:124] DEMOLISHING addons-20220516215732-2444 ...
	I0516 21:58:06.143800    8008 cli_runner.go:164] Run: docker container inspect addons-20220516215732-2444 --format={{.State.Status}}
	W0516 21:58:07.151092    8008 cli_runner.go:211] docker container inspect addons-20220516215732-2444 --format={{.State.Status}} returned with exit code 1
	I0516 21:58:07.151092    8008 cli_runner.go:217] Completed: docker container inspect addons-20220516215732-2444 --format={{.State.Status}}: (1.0072881s)
	W0516 21:58:07.151092    8008 stop.go:75] unable to get state: unknown state "addons-20220516215732-2444": docker container inspect addons-20220516215732-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	I0516 21:58:07.151092    8008 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "addons-20220516215732-2444": docker container inspect addons-20220516215732-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	I0516 21:58:07.167180    8008 cli_runner.go:164] Run: docker container inspect addons-20220516215732-2444 --format={{.State.Status}}
	W0516 21:58:08.197291    8008 cli_runner.go:211] docker container inspect addons-20220516215732-2444 --format={{.State.Status}} returned with exit code 1
	I0516 21:58:08.197291    8008 cli_runner.go:217] Completed: docker container inspect addons-20220516215732-2444 --format={{.State.Status}}: (1.0301074s)
	I0516 21:58:08.197291    8008 delete.go:82] Unable to get host status for addons-20220516215732-2444, assuming it has already been deleted: state: unknown state "addons-20220516215732-2444": docker container inspect addons-20220516215732-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	I0516 21:58:08.206909    8008 cli_runner.go:164] Run: docker container inspect -f {{.Id}} addons-20220516215732-2444
	W0516 21:58:09.244999    8008 cli_runner.go:211] docker container inspect -f {{.Id}} addons-20220516215732-2444 returned with exit code 1
	I0516 21:58:09.244999    8008 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} addons-20220516215732-2444: (1.0380864s)
	I0516 21:58:09.244999    8008 kic.go:356] could not find the container addons-20220516215732-2444 to remove it. will try anyways
	I0516 21:58:09.253778    8008 cli_runner.go:164] Run: docker container inspect addons-20220516215732-2444 --format={{.State.Status}}
	W0516 21:58:10.279471    8008 cli_runner.go:211] docker container inspect addons-20220516215732-2444 --format={{.State.Status}} returned with exit code 1
	I0516 21:58:10.279560    8008 cli_runner.go:217] Completed: docker container inspect addons-20220516215732-2444 --format={{.State.Status}}: (1.0254585s)
	W0516 21:58:10.279652    8008 oci.go:84] error getting container status, will try to delete anyways: unknown state "addons-20220516215732-2444": docker container inspect addons-20220516215732-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	I0516 21:58:10.288391    8008 cli_runner.go:164] Run: docker exec --privileged -t addons-20220516215732-2444 /bin/bash -c "sudo init 0"
	W0516 21:58:11.296048    8008 cli_runner.go:211] docker exec --privileged -t addons-20220516215732-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 21:58:11.296341    8008 cli_runner.go:217] Completed: docker exec --privileged -t addons-20220516215732-2444 /bin/bash -c "sudo init 0": (1.0076529s)
	I0516 21:58:11.296341    8008 oci.go:641] error shutdown addons-20220516215732-2444: docker exec --privileged -t addons-20220516215732-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	I0516 21:58:12.321653    8008 cli_runner.go:164] Run: docker container inspect addons-20220516215732-2444 --format={{.State.Status}}
	W0516 21:58:13.364083    8008 cli_runner.go:211] docker container inspect addons-20220516215732-2444 --format={{.State.Status}} returned with exit code 1
	I0516 21:58:13.364129    8008 cli_runner.go:217] Completed: docker container inspect addons-20220516215732-2444 --format={{.State.Status}}: (1.0422132s)
	I0516 21:58:13.364243    8008 oci.go:653] temporary error verifying shutdown: unknown state "addons-20220516215732-2444": docker container inspect addons-20220516215732-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	I0516 21:58:13.364273    8008 oci.go:655] temporary error: container addons-20220516215732-2444 status is  but expect it to be exited
	I0516 21:58:13.364343    8008 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "addons-20220516215732-2444": docker container inspect addons-20220516215732-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	I0516 21:58:13.839590    8008 cli_runner.go:164] Run: docker container inspect addons-20220516215732-2444 --format={{.State.Status}}
	W0516 21:58:14.833127    8008 cli_runner.go:211] docker container inspect addons-20220516215732-2444 --format={{.State.Status}} returned with exit code 1
	I0516 21:58:14.833127    8008 oci.go:653] temporary error verifying shutdown: unknown state "addons-20220516215732-2444": docker container inspect addons-20220516215732-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	I0516 21:58:14.833127    8008 oci.go:655] temporary error: container addons-20220516215732-2444 status is  but expect it to be exited
	I0516 21:58:14.833127    8008 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "addons-20220516215732-2444": docker container inspect addons-20220516215732-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	I0516 21:58:15.741525    8008 cli_runner.go:164] Run: docker container inspect addons-20220516215732-2444 --format={{.State.Status}}
	W0516 21:58:16.764764    8008 cli_runner.go:211] docker container inspect addons-20220516215732-2444 --format={{.State.Status}} returned with exit code 1
	I0516 21:58:16.764764    8008 cli_runner.go:217] Completed: docker container inspect addons-20220516215732-2444 --format={{.State.Status}}: (1.0232351s)
	I0516 21:58:16.764764    8008 oci.go:653] temporary error verifying shutdown: unknown state "addons-20220516215732-2444": docker container inspect addons-20220516215732-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	I0516 21:58:16.764764    8008 oci.go:655] temporary error: container addons-20220516215732-2444 status is  but expect it to be exited
	I0516 21:58:16.764764    8008 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "addons-20220516215732-2444": docker container inspect addons-20220516215732-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	I0516 21:58:17.425545    8008 cli_runner.go:164] Run: docker container inspect addons-20220516215732-2444 --format={{.State.Status}}
	W0516 21:58:18.436850    8008 cli_runner.go:211] docker container inspect addons-20220516215732-2444 --format={{.State.Status}} returned with exit code 1
	I0516 21:58:18.436850    8008 cli_runner.go:217] Completed: docker container inspect addons-20220516215732-2444 --format={{.State.Status}}: (1.0113008s)
	I0516 21:58:18.436850    8008 oci.go:653] temporary error verifying shutdown: unknown state "addons-20220516215732-2444": docker container inspect addons-20220516215732-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	I0516 21:58:18.436850    8008 oci.go:655] temporary error: container addons-20220516215732-2444 status is  but expect it to be exited
	I0516 21:58:18.436850    8008 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "addons-20220516215732-2444": docker container inspect addons-20220516215732-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	I0516 21:58:19.566595    8008 cli_runner.go:164] Run: docker container inspect addons-20220516215732-2444 --format={{.State.Status}}
	W0516 21:58:20.594712    8008 cli_runner.go:211] docker container inspect addons-20220516215732-2444 --format={{.State.Status}} returned with exit code 1
	I0516 21:58:20.594779    8008 cli_runner.go:217] Completed: docker container inspect addons-20220516215732-2444 --format={{.State.Status}}: (1.0280549s)
	I0516 21:58:20.594779    8008 oci.go:653] temporary error verifying shutdown: unknown state "addons-20220516215732-2444": docker container inspect addons-20220516215732-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	I0516 21:58:20.594779    8008 oci.go:655] temporary error: container addons-20220516215732-2444 status is  but expect it to be exited
	I0516 21:58:20.594779    8008 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "addons-20220516215732-2444": docker container inspect addons-20220516215732-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	I0516 21:58:22.129380    8008 cli_runner.go:164] Run: docker container inspect addons-20220516215732-2444 --format={{.State.Status}}
	W0516 21:58:23.150765    8008 cli_runner.go:211] docker container inspect addons-20220516215732-2444 --format={{.State.Status}} returned with exit code 1
	I0516 21:58:23.151071    8008 cli_runner.go:217] Completed: docker container inspect addons-20220516215732-2444 --format={{.State.Status}}: (1.0213819s)
	I0516 21:58:23.151071    8008 oci.go:653] temporary error verifying shutdown: unknown state "addons-20220516215732-2444": docker container inspect addons-20220516215732-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	I0516 21:58:23.151071    8008 oci.go:655] temporary error: container addons-20220516215732-2444 status is  but expect it to be exited
	I0516 21:58:23.151071    8008 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "addons-20220516215732-2444": docker container inspect addons-20220516215732-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	I0516 21:58:26.215644    8008 cli_runner.go:164] Run: docker container inspect addons-20220516215732-2444 --format={{.State.Status}}
	W0516 21:58:27.223666    8008 cli_runner.go:211] docker container inspect addons-20220516215732-2444 --format={{.State.Status}} returned with exit code 1
	I0516 21:58:27.223666    8008 cli_runner.go:217] Completed: docker container inspect addons-20220516215732-2444 --format={{.State.Status}}: (1.0080183s)
	I0516 21:58:27.223666    8008 oci.go:653] temporary error verifying shutdown: unknown state "addons-20220516215732-2444": docker container inspect addons-20220516215732-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	I0516 21:58:27.223666    8008 oci.go:655] temporary error: container addons-20220516215732-2444 status is  but expect it to be exited
	I0516 21:58:27.223666    8008 oci.go:88] couldn't shut down addons-20220516215732-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "addons-20220516215732-2444": docker container inspect addons-20220516215732-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	 
	I0516 21:58:27.232764    8008 cli_runner.go:164] Run: docker rm -f -v addons-20220516215732-2444
	I0516 21:58:28.238029    8008 cli_runner.go:217] Completed: docker rm -f -v addons-20220516215732-2444: (1.0052612s)
	I0516 21:58:28.247707    8008 cli_runner.go:164] Run: docker container inspect -f {{.Id}} addons-20220516215732-2444
	W0516 21:58:29.284989    8008 cli_runner.go:211] docker container inspect -f {{.Id}} addons-20220516215732-2444 returned with exit code 1
	I0516 21:58:29.284989    8008 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} addons-20220516215732-2444: (1.037278s)
	I0516 21:58:29.293524    8008 cli_runner.go:164] Run: docker network inspect addons-20220516215732-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 21:58:30.330700    8008 cli_runner.go:211] docker network inspect addons-20220516215732-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 21:58:30.330843    8008 cli_runner.go:217] Completed: docker network inspect addons-20220516215732-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0370062s)
	I0516 21:58:30.339428    8008 network_create.go:272] running [docker network inspect addons-20220516215732-2444] to gather additional debugging logs...
	I0516 21:58:30.339428    8008 cli_runner.go:164] Run: docker network inspect addons-20220516215732-2444
	W0516 21:58:31.377299    8008 cli_runner.go:211] docker network inspect addons-20220516215732-2444 returned with exit code 1
	I0516 21:58:31.377299    8008 cli_runner.go:217] Completed: docker network inspect addons-20220516215732-2444: (1.037867s)
	I0516 21:58:31.377299    8008 network_create.go:275] error running [docker network inspect addons-20220516215732-2444]: docker network inspect addons-20220516215732-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20220516215732-2444
	I0516 21:58:31.377299    8008 network_create.go:277] output of [docker network inspect addons-20220516215732-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20220516215732-2444
	
	** /stderr **
	W0516 21:58:31.378756    8008 delete.go:139] delete failed (probably ok) <nil>
	I0516 21:58:31.378756    8008 fix.go:115] Sleeping 1 second for extra luck!
	I0516 21:58:32.383714    8008 start.go:131] createHost starting for "" (driver="docker")
	I0516 21:58:32.389081    8008 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0516 21:58:32.389606    8008 start.go:165] libmachine.API.Create for "addons-20220516215732-2444" (driver="docker")
	I0516 21:58:32.389606    8008 client.go:168] LocalClient.Create starting
	I0516 21:58:32.390393    8008 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 21:58:32.390636    8008 main.go:134] libmachine: Decoding PEM data...
	I0516 21:58:32.390667    8008 main.go:134] libmachine: Parsing certificate...
	I0516 21:58:32.390957    8008 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 21:58:32.391140    8008 main.go:134] libmachine: Decoding PEM data...
	I0516 21:58:32.391171    8008 main.go:134] libmachine: Parsing certificate...
	I0516 21:58:32.402124    8008 cli_runner.go:164] Run: docker network inspect addons-20220516215732-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 21:58:33.436211    8008 cli_runner.go:211] docker network inspect addons-20220516215732-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 21:58:33.436211    8008 cli_runner.go:217] Completed: docker network inspect addons-20220516215732-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0340826s)
	I0516 21:58:33.445282    8008 network_create.go:272] running [docker network inspect addons-20220516215732-2444] to gather additional debugging logs...
	I0516 21:58:33.445282    8008 cli_runner.go:164] Run: docker network inspect addons-20220516215732-2444
	W0516 21:58:34.464593    8008 cli_runner.go:211] docker network inspect addons-20220516215732-2444 returned with exit code 1
	I0516 21:58:34.464741    8008 cli_runner.go:217] Completed: docker network inspect addons-20220516215732-2444: (1.0193074s)
	I0516 21:58:34.464741    8008 network_create.go:275] error running [docker network inspect addons-20220516215732-2444]: docker network inspect addons-20220516215732-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20220516215732-2444
	I0516 21:58:34.464790    8008 network_create.go:277] output of [docker network inspect addons-20220516215732-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20220516215732-2444
	
	** /stderr **
	I0516 21:58:34.472899    8008 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 21:58:35.494150    8008 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0212472s)
	I0516 21:58:35.510361    8008 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006b90] amended:true}} dirty:map[192.168.49.0:0xc000006b90 192.168.58.0:0xc000764270 192.168.67.0:0xc000592528 192.168.76.0:0xc000764308] misses:2}
	I0516 21:58:35.510361    8008 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 21:58:35.523476    8008 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006b90] amended:true}} dirty:map[192.168.49.0:0xc000006b90 192.168.58.0:0xc000764270 192.168.67.0:0xc000592528 192.168.76.0:0xc000764308] misses:3}
	I0516 21:58:35.523476    8008 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 21:58:35.538592    8008 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006b90 192.168.58.0:0xc000764270 192.168.67.0:0xc000592528 192.168.76.0:0xc000764308] amended:false}} dirty:map[] misses:0}
	I0516 21:58:35.538592    8008 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 21:58:35.545508    8008 network.go:279] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006b90 192.168.58.0:0xc000764270 192.168.67.0:0xc000592528 192.168.76.0:0xc000764308] amended:false}} dirty:map[] misses:0}
	I0516 21:58:35.545508    8008 network.go:238] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 21:58:35.567632    8008 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006b90 192.168.58.0:0xc000764270 192.168.67.0:0xc000592528 192.168.76.0:0xc000764308] amended:true}} dirty:map[192.168.49.0:0xc000006b90 192.168.58.0:0xc000764270 192.168.67.0:0xc000592528 192.168.76.0:0xc000764308 192.168.85.0:0xc000006e58] misses:0}
	I0516 21:58:35.568605    8008 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 21:58:35.568777    8008 network_create.go:115] attempt to create docker network addons-20220516215732-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0516 21:58:35.576441    8008 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220516215732-2444
	W0516 21:58:36.676293    8008 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220516215732-2444 returned with exit code 1
	I0516 21:58:36.676293    8008 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220516215732-2444: (1.098787s)
	E0516 21:58:36.676293    8008 network_create.go:104] error while trying to create docker network addons-20220516215732-2444 192.168.85.0/24: create docker network addons-20220516215732-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220516215732-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: failed to update bridge store for object type *bridge.networkConfiguration: open /var/lib/docker/network/files/local-kv.db: read-only file system
	W0516 21:58:36.676293    8008 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network addons-20220516215732-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220516215732-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: failed to update bridge store for object type *bridge.networkConfiguration: open /var/lib/docker/network/files/local-kv.db: read-only file system
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network addons-20220516215732-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220516215732-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: failed to update bridge store for object type *bridge.networkConfiguration: open /var/lib/docker/network/files/local-kv.db: read-only file system
	
	I0516 21:58:36.691300    8008 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 21:58:37.760076    8008 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0686468s)
	I0516 21:58:37.768962    8008 cli_runner.go:164] Run: docker volume create addons-20220516215732-2444 --label name.minikube.sigs.k8s.io=addons-20220516215732-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 21:58:38.778295    8008 cli_runner.go:211] docker volume create addons-20220516215732-2444 --label name.minikube.sigs.k8s.io=addons-20220516215732-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 21:58:38.778295    8008 cli_runner.go:217] Completed: docker volume create addons-20220516215732-2444 --label name.minikube.sigs.k8s.io=addons-20220516215732-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0093295s)
	I0516 21:58:38.778295    8008 client.go:171] LocalClient.Create took 6.3886654s
	I0516 21:58:40.792252    8008 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 21:58:40.799948    8008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444
	W0516 21:58:41.811975    8008 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444 returned with exit code 1
	I0516 21:58:41.811975    8008 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444: (1.0117948s)
	I0516 21:58:41.812178    8008 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220516215732-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	I0516 21:58:42.154133    8008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444
	W0516 21:58:43.197552    8008 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444 returned with exit code 1
	I0516 21:58:43.197724    8008 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444: (1.0433661s)
	W0516 21:58:43.197924    8008 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220516215732-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	
	W0516 21:58:43.197972    8008 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220516215732-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	I0516 21:58:43.208905    8008 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 21:58:43.214993    8008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444
	W0516 21:58:44.231785    8008 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444 returned with exit code 1
	I0516 21:58:44.231785    8008 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444: (1.0167888s)
	I0516 21:58:44.231785    8008 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220516215732-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	I0516 21:58:44.468057    8008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444
	W0516 21:58:45.491792    8008 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444 returned with exit code 1
	I0516 21:58:45.491792    8008 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444: (1.0237312s)
	W0516 21:58:45.491792    8008 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220516215732-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	
	W0516 21:58:45.491792    8008 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220516215732-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	I0516 21:58:45.491792    8008 start.go:134] duration metric: createHost completed in 13.1080295s
	I0516 21:58:45.505110    8008 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 21:58:45.513341    8008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444
	W0516 21:58:46.571883    8008 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444 returned with exit code 1
	I0516 21:58:46.572056    8008 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444: (1.0584s)
	I0516 21:58:46.572251    8008 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220516215732-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	I0516 21:58:46.825975    8008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444
	W0516 21:58:47.875168    8008 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444 returned with exit code 1
	I0516 21:58:47.875168    8008 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444: (1.0491894s)
	W0516 21:58:47.875168    8008 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220516215732-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	
	W0516 21:58:47.875168    8008 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220516215732-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	I0516 21:58:47.888066    8008 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 21:58:47.895164    8008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444
	W0516 21:58:48.907087    8008 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444 returned with exit code 1
	I0516 21:58:48.907153    8008 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444: (1.0107352s)
	I0516 21:58:48.907338    8008 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220516215732-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	I0516 21:58:49.119288    8008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444
	W0516 21:58:50.145039    8008 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444 returned with exit code 1
	I0516 21:58:50.145039    8008 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444: (1.0257471s)
	W0516 21:58:50.145039    8008 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220516215732-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	
	W0516 21:58:50.145039    8008 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220516215732-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220516215732-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220516215732-2444
	I0516 21:58:50.145039    8008 fix.go:57] fixHost completed within 45.061875s
	I0516 21:58:50.145039    8008 start.go:81] releasing machines lock for "addons-20220516215732-2444", held for 45.0624447s
	W0516 21:58:50.145877    8008 out.go:239] * Failed to start docker container. Running "minikube delete -p addons-20220516215732-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for addons-20220516215732-2444 container: docker volume create addons-20220516215732-2444 --label name.minikube.sigs.k8s.io=addons-20220516215732-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create addons-20220516215732-2444: error while creating volume root path '/var/lib/docker/volumes/addons-20220516215732-2444': mkdir /var/lib/docker/volumes/addons-20220516215732-2444: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p addons-20220516215732-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for addons-20220516215732-2444 container: docker volume create addons-20220516215732-2444 --label name.minikube.sigs.k8s.io=addons-20220516215732-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create addons-20220516215732-2444: error while creating volume root path '/var/lib/docker/volumes/addons-20220516215732-2444': mkdir /var/lib/docker/volumes/addons-20220516215732-2444: read-only file system
	
	I0516 21:58:50.150818    8008 out.go:177] 
	W0516 21:58:50.152947    8008 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for addons-20220516215732-2444 container: docker volume create addons-20220516215732-2444 --label name.minikube.sigs.k8s.io=addons-20220516215732-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create addons-20220516215732-2444: error while creating volume root path '/var/lib/docker/volumes/addons-20220516215732-2444': mkdir /var/lib/docker/volumes/addons-20220516215732-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for addons-20220516215732-2444 container: docker volume create addons-20220516215732-2444 --label name.minikube.sigs.k8s.io=addons-20220516215732-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create addons-20220516215732-2444: error while creating volume root path '/var/lib/docker/volumes/addons-20220516215732-2444': mkdir /var/lib/docker/volumes/addons-20220516215732-2444: read-only file system
	
	W0516 21:58:50.152947    8008 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0516 21:58:50.152947    8008 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0516 21:58:50.155393    8008 out.go:177] 

** /stderr **
addons_test.go:77: out/minikube-windows-amd64.exe start -p addons-20220516215732-2444 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller failed: exit status 60
--- FAIL: TestAddons/Setup (78.10s)

TestCertOptions (101.63s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-20220516225447-2444 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cert-options-20220516225447-2444 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: exit status 60 (1m21.5962139s)

-- stdout --
	* [cert-options-20220516225447-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node cert-options-20220516225447-2444 in cluster cert-options-20220516225447-2444
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cert-options-20220516225447-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	E0516 22:55:05.889197    6152 network_create.go:104] error while trying to create docker network cert-options-20220516225447-2444 192.168.76.0/24: create docker network cert-options-20220516225447-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20220516225447-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network de814db10f498ddfa502c69dc626b2c330e1c3f4268a614454e62bf5ef08293e (br-de814db10f49): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cert-options-20220516225447-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20220516225447-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network de814db10f498ddfa502c69dc626b2c330e1c3f4268a614454e62bf5ef08293e (br-de814db10f49): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for cert-options-20220516225447-2444 container: docker volume create cert-options-20220516225447-2444 --label name.minikube.sigs.k8s.io=cert-options-20220516225447-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-options-20220516225447-2444: error while creating volume root path '/var/lib/docker/volumes/cert-options-20220516225447-2444': mkdir /var/lib/docker/volumes/cert-options-20220516225447-2444: read-only file system
	
	E0516 22:55:54.609412    6152 network_create.go:104] error while trying to create docker network cert-options-20220516225447-2444 192.168.85.0/24: create docker network cert-options-20220516225447-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20220516225447-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 6f6394092e27ef8512b9c6b6bacb8d3d82568dce1b284ecb9efad41ea0d339dd (br-6f6394092e27): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cert-options-20220516225447-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20220516225447-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 6f6394092e27ef8512b9c6b6bacb8d3d82568dce1b284ecb9efad41ea0d339dd (br-6f6394092e27): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p cert-options-20220516225447-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for cert-options-20220516225447-2444 container: docker volume create cert-options-20220516225447-2444 --label name.minikube.sigs.k8s.io=cert-options-20220516225447-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-options-20220516225447-2444: error while creating volume root path '/var/lib/docker/volumes/cert-options-20220516225447-2444': mkdir /var/lib/docker/volumes/cert-options-20220516225447-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for cert-options-20220516225447-2444 container: docker volume create cert-options-20220516225447-2444 --label name.minikube.sigs.k8s.io=cert-options-20220516225447-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-options-20220516225447-2444: error while creating volume root path '/var/lib/docker/volumes/cert-options-20220516225447-2444': mkdir /var/lib/docker/volumes/cert-options-20220516225447-2444: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p cert-options-20220516225447-2444 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost" : exit status 60
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-20220516225447-2444 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p cert-options-20220516225447-2444 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 80 (3.2326968s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "cert-options-20220516225447-2444": docker container inspect cert-options-20220516225447-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-options-20220516225447-2444
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_7b8531d53ef9e7bbc6fc851111559258d7d600b6_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-windows-amd64.exe -p cert-options-20220516225447-2444 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 80
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:82: failed to inspect container for the port get port 8555 for "cert-options-20220516225447-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8555/tcp") 0).HostPort}}'" cert-options-20220516225447-2444: exit status 1
stdout:

stderr:
Error: No such container: cert-options-20220516225447-2444
cert_options_test.go:85: expected to get a non-zero forwarded port but got 0
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-20220516225447-2444 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p cert-options-20220516225447-2444 -- "sudo cat /etc/kubernetes/admin.conf": exit status 80 (3.2042624s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "cert-options-20220516225447-2444": docker container inspect cert-options-20220516225447-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-options-20220516225447-2444
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_bf4b0acc5ddf49539e7b1dcbc83bd1916f9eb405_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-windows-amd64.exe ssh -p cert-options-20220516225447-2444 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 80
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port.
-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "cert-options-20220516225447-2444": docker container inspect cert-options-20220516225447-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-options-20220516225447-2444
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_bf4b0acc5ddf49539e7b1dcbc83bd1916f9eb405_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:109: *** TestCertOptions FAILED at 2022-05-16 22:56:16.388509 +0000 GMT m=+3644.054776901
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestCertOptions]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect cert-options-20220516225447-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect cert-options-20220516225447-2444: exit status 1 (1.2125705s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: cert-options-20220516225447-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-options-20220516225447-2444 -n cert-options-20220516225447-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-options-20220516225447-2444 -n cert-options-20220516225447-2444: exit status 7 (2.8907901s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:56:20.471085    6896 status.go:247] status error: host: state: unknown state "cert-options-20220516225447-2444": docker container inspect cert-options-20220516225447-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-options-20220516225447-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-20220516225447-2444" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "cert-options-20220516225447-2444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-20220516225447-2444
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-20220516225447-2444: (8.3679059s)
--- FAIL: TestCertOptions (101.63s)
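Aside on the recurring "networks have overlapping IPv4" failures above: Docker refuses to create a bridge network whose subnet overlaps an existing bridge's range. The log does not show the subnet of the conflicting network (br-301630a99a7e), so the `existing` value below is a hypothetical stand-in; this is only a minimal sketch of the overlap check, not part of the test suite.

```python
import ipaddress

# minikube requested this subnet for its dedicated network.
requested = ipaddress.ip_network("192.168.76.0/24")

# Hypothetical subnet of the pre-existing bridge network (the real one
# belonging to br-301630a99a7e is not shown in the log).
existing = ipaddress.ip_network("192.168.0.0/16")

# overlaps() reproduces the kind of IPv4-range collision Docker rejects.
print(requested.overlaps(existing))  # True -> "networks have overlapping IPv4"
```

One can run the same check against each subnet reported by `docker network inspect` to find which existing bridge is blocking the requested range.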

TestCertExpiration (392.38s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-20220516225440-2444 --memory=2048 --cert-expiration=3m --driver=docker

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cert-expiration-20220516225440-2444 --memory=2048 --cert-expiration=3m --driver=docker: exit status 60 (1m21.3298519s)

-- stdout --
	* [cert-expiration-20220516225440-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node cert-expiration-20220516225440-2444 in cluster cert-expiration-20220516225440-2444
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cert-expiration-20220516225440-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	E0516 22:54:58.520069    8304 network_create.go:104] error while trying to create docker network cert-expiration-20220516225440-2444 192.168.76.0/24: create docker network cert-expiration-20220516225440-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-expiration-20220516225440-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ecdf8ac5fa210a364b26af6f3f574bc15df1ff8755bc674bf5587119c62eb3a8 (br-ecdf8ac5fa21): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cert-expiration-20220516225440-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-expiration-20220516225440-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ecdf8ac5fa210a364b26af6f3f574bc15df1ff8755bc674bf5587119c62eb3a8 (br-ecdf8ac5fa21): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for cert-expiration-20220516225440-2444 container: docker volume create cert-expiration-20220516225440-2444 --label name.minikube.sigs.k8s.io=cert-expiration-20220516225440-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-expiration-20220516225440-2444: error while creating volume root path '/var/lib/docker/volumes/cert-expiration-20220516225440-2444': mkdir /var/lib/docker/volumes/cert-expiration-20220516225440-2444: read-only file system
	
	E0516 22:55:47.023778    8304 network_create.go:104] error while trying to create docker network cert-expiration-20220516225440-2444 192.168.85.0/24: create docker network cert-expiration-20220516225440-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-expiration-20220516225440-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d072282224058a8cae8364a3cffd9198c836b9e136fde57f9e227f72f84cc577 (br-d07228222405): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cert-expiration-20220516225440-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-expiration-20220516225440-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d072282224058a8cae8364a3cffd9198c836b9e136fde57f9e227f72f84cc577 (br-d07228222405): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p cert-expiration-20220516225440-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for cert-expiration-20220516225440-2444 container: docker volume create cert-expiration-20220516225440-2444 --label name.minikube.sigs.k8s.io=cert-expiration-20220516225440-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-expiration-20220516225440-2444: error while creating volume root path '/var/lib/docker/volumes/cert-expiration-20220516225440-2444': mkdir /var/lib/docker/volumes/cert-expiration-20220516225440-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for cert-expiration-20220516225440-2444 container: docker volume create cert-expiration-20220516225440-2444 --label name.minikube.sigs.k8s.io=cert-expiration-20220516225440-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-expiration-20220516225440-2444: error while creating volume root path '/var/lib/docker/volumes/cert-expiration-20220516225440-2444': mkdir /var/lib/docker/volumes/cert-expiration-20220516225440-2444: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p cert-expiration-20220516225440-2444 --memory=2048 --cert-expiration=3m --driver=docker" : exit status 60

=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-20220516225440-2444 --memory=2048 --cert-expiration=8760h --driver=docker

=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cert-expiration-20220516225440-2444 --memory=2048 --cert-expiration=8760h --driver=docker: exit status 60 (1m58.3159332s)

-- stdout --
	* [cert-expiration-20220516225440-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node cert-expiration-20220516225440-2444 in cluster cert-expiration-20220516225440-2444
	* Pulling base image ...
	* docker "cert-expiration-20220516225440-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cert-expiration-20220516225440-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	E0516 22:59:52.150668    8628 network_create.go:104] error while trying to create docker network cert-expiration-20220516225440-2444 192.168.76.0/24: create docker network cert-expiration-20220516225440-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-expiration-20220516225440-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 0dfc4c48dc8125a6be8cd4735b8624f0d79498ef6bd89055823aaed96b95e0ca (br-0dfc4c48dc81): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cert-expiration-20220516225440-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-expiration-20220516225440-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 0dfc4c48dc8125a6be8cd4735b8624f0d79498ef6bd89055823aaed96b95e0ca (br-0dfc4c48dc81): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for cert-expiration-20220516225440-2444 container: docker volume create cert-expiration-20220516225440-2444 --label name.minikube.sigs.k8s.io=cert-expiration-20220516225440-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-expiration-20220516225440-2444: error while creating volume root path '/var/lib/docker/volumes/cert-expiration-20220516225440-2444': mkdir /var/lib/docker/volumes/cert-expiration-20220516225440-2444: read-only file system
	
	E0516 23:00:45.214296    8628 network_create.go:104] error while trying to create docker network cert-expiration-20220516225440-2444 192.168.85.0/24: create docker network cert-expiration-20220516225440-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-expiration-20220516225440-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 900183963d764a90691d942ef31a99c7d77eb5b6e8622610785c041a9c91fda6 (br-900183963d76): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cert-expiration-20220516225440-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-expiration-20220516225440-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 900183963d764a90691d942ef31a99c7d77eb5b6e8622610785c041a9c91fda6 (br-900183963d76): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p cert-expiration-20220516225440-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for cert-expiration-20220516225440-2444 container: docker volume create cert-expiration-20220516225440-2444 --label name.minikube.sigs.k8s.io=cert-expiration-20220516225440-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-expiration-20220516225440-2444: error while creating volume root path '/var/lib/docker/volumes/cert-expiration-20220516225440-2444': mkdir /var/lib/docker/volumes/cert-expiration-20220516225440-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for cert-expiration-20220516225440-2444 container: docker volume create cert-expiration-20220516225440-2444 --label name.minikube.sigs.k8s.io=cert-expiration-20220516225440-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-expiration-20220516225440-2444: error while creating volume root path '/var/lib/docker/volumes/cert-expiration-20220516225440-2444': mkdir /var/lib/docker/volumes/cert-expiration-20220516225440-2444: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-windows-amd64.exe start -p cert-expiration-20220516225440-2444 --memory=2048 --cert-expiration=8760h --driver=docker" : exit status 60
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-20220516225440-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node cert-expiration-20220516225440-2444 in cluster cert-expiration-20220516225440-2444
	* Pulling base image ...
	* docker "cert-expiration-20220516225440-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cert-expiration-20220516225440-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	E0516 22:59:52.150668    8628 network_create.go:104] error while trying to create docker network cert-expiration-20220516225440-2444 192.168.76.0/24: create docker network cert-expiration-20220516225440-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-expiration-20220516225440-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 0dfc4c48dc8125a6be8cd4735b8624f0d79498ef6bd89055823aaed96b95e0ca (br-0dfc4c48dc81): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cert-expiration-20220516225440-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-expiration-20220516225440-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 0dfc4c48dc8125a6be8cd4735b8624f0d79498ef6bd89055823aaed96b95e0ca (br-0dfc4c48dc81): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for cert-expiration-20220516225440-2444 container: docker volume create cert-expiration-20220516225440-2444 --label name.minikube.sigs.k8s.io=cert-expiration-20220516225440-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-expiration-20220516225440-2444: error while creating volume root path '/var/lib/docker/volumes/cert-expiration-20220516225440-2444': mkdir /var/lib/docker/volumes/cert-expiration-20220516225440-2444: read-only file system
	
	E0516 23:00:45.214296    8628 network_create.go:104] error while trying to create docker network cert-expiration-20220516225440-2444 192.168.85.0/24: create docker network cert-expiration-20220516225440-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-expiration-20220516225440-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 900183963d764a90691d942ef31a99c7d77eb5b6e8622610785c041a9c91fda6 (br-900183963d76): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cert-expiration-20220516225440-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-expiration-20220516225440-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 900183963d764a90691d942ef31a99c7d77eb5b6e8622610785c041a9c91fda6 (br-900183963d76): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p cert-expiration-20220516225440-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for cert-expiration-20220516225440-2444 container: docker volume create cert-expiration-20220516225440-2444 --label name.minikube.sigs.k8s.io=cert-expiration-20220516225440-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-expiration-20220516225440-2444: error while creating volume root path '/var/lib/docker/volumes/cert-expiration-20220516225440-2444': mkdir /var/lib/docker/volumes/cert-expiration-20220516225440-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for cert-expiration-20220516225440-2444 container: docker volume create cert-expiration-20220516225440-2444 --label name.minikube.sigs.k8s.io=cert-expiration-20220516225440-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-expiration-20220516225440-2444: error while creating volume root path '/var/lib/docker/volumes/cert-expiration-20220516225440-2444': mkdir /var/lib/docker/volumes/cert-expiration-20220516225440-2444: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2022-05-16 23:00:59.6650101 +0000 GMT m=+3927.328867101
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestCertExpiration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect cert-expiration-20220516225440-2444

=== CONT  TestCertExpiration
helpers_test.go:231: (dbg) Non-zero exit: docker inspect cert-expiration-20220516225440-2444: exit status 1 (1.198341s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: cert-expiration-20220516225440-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-expiration-20220516225440-2444 -n cert-expiration-20220516225440-2444

=== CONT  TestCertExpiration
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-expiration-20220516225440-2444 -n cert-expiration-20220516225440-2444: exit status 7 (3.017004s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 23:01:03.872414    4436 status.go:247] status error: host: state: unknown state "cert-expiration-20220516225440-2444": docker container inspect cert-expiration-20220516225440-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-expiration-20220516225440-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-20220516225440-2444" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "cert-expiration-20220516225440-2444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-20220516225440-2444

=== CONT  TestCertExpiration
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-20220516225440-2444: (8.4987215s)
--- FAIL: TestCertExpiration (392.38s)
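The repeated "networks have overlapping IPv4" failures in this test come from each candidate bridge subnet intersecting a /24 Docker already has. The check Docker performs can be sketched with Python's standard-library `ipaddress` module; this is an illustrative model, not minikube's or Docker's actual code:

```python
import ipaddress

def subnets_overlap(a: str, b: str) -> bool:
    """True when two CIDR blocks share at least one address."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

# Reclaiming an already-used /24 collides; a distinct /24 does not.
print(subnets_overlap("192.168.76.0/24", "192.168.76.0/24"))  # True
print(subnets_overlap("192.168.76.0/24", "192.168.85.0/24"))  # False
```

When the daemon detects such an overlap, `docker network create` exits with status 1, which is why minikube logs "subnet is taken" and retries with the next candidate.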

TestDockerFlags (100.86s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-20220516225417-2444 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p docker-flags-20220516225417-2444 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: exit status 60 (1m21.3137239s)

-- stdout --
	* [docker-flags-20220516225417-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node docker-flags-20220516225417-2444 in cluster docker-flags-20220516225417-2444
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "docker-flags-20220516225417-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0516 22:54:17.251861    8708 out.go:296] Setting OutFile to fd 1792 ...
	I0516 22:54:17.310351    8708 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:54:17.310351    8708 out.go:309] Setting ErrFile to fd 1844...
	I0516 22:54:17.310351    8708 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:54:17.322521    8708 out.go:303] Setting JSON to false
	I0516 22:54:17.326036    8708 start.go:115] hostinfo: {"hostname":"minikube2","uptime":4769,"bootTime":1652736888,"procs":158,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0516 22:54:17.326587    8708 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0516 22:54:17.331118    8708 out.go:177] * [docker-flags-20220516225417-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0516 22:54:17.341124    8708 notify.go:193] Checking for updates...
	I0516 22:54:17.344124    8708 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0516 22:54:17.346116    8708 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0516 22:54:17.348126    8708 out.go:177]   - MINIKUBE_LOCATION=12739
	I0516 22:54:17.351146    8708 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0516 22:54:17.354126    8708 config.go:178] Loaded profile config "force-systemd-env-20220516225309-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:54:17.354126    8708 config.go:178] Loaded profile config "kubernetes-upgrade-20220516225336-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0516 22:54:17.355126    8708 config.go:178] Loaded profile config "multinode-20220516223121-2444-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:54:17.355126    8708 config.go:178] Loaded profile config "running-upgrade-20220516224826-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0516 22:54:17.355126    8708 driver.go:358] Setting default libvirt URI to qemu:///system
	I0516 22:54:20.014577    8708 docker.go:137] docker version: linux-20.10.14
	I0516 22:54:20.023471    8708 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:54:22.118055    8708 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0945665s)
	I0516 22:54:22.118055    8708 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 22:54:21.0407719 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:54:22.122059    8708 out.go:177] * Using the docker driver based on user configuration
	I0516 22:54:22.125063    8708 start.go:284] selected driver: docker
	I0516 22:54:22.125063    8708 start.go:806] validating driver "docker" against <nil>
	I0516 22:54:22.126038    8708 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0516 22:54:22.203874    8708 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:54:24.330668    8708 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1267761s)
	I0516 22:54:24.330668    8708 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 22:54:23.2631462 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:54:24.331321    8708 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0516 22:54:24.331956    8708 start_flags.go:842] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0516 22:54:24.336363    8708 out.go:177] * Using Docker Desktop driver with the root privilege
	I0516 22:54:24.340083    8708 cni.go:95] Creating CNI manager for ""
	I0516 22:54:24.340083    8708 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0516 22:54:24.340083    8708 start_flags.go:306] config:
	{Name:docker-flags-20220516225417-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:docker-flags-20220516225417-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDom
ain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 22:54:24.345404    8708 out.go:177] * Starting control plane node docker-flags-20220516225417-2444 in cluster docker-flags-20220516225417-2444
	I0516 22:54:24.350225    8708 cache.go:120] Beginning downloading kic base image for docker with docker
	I0516 22:54:24.353173    8708 out.go:177] * Pulling base image ...
	I0516 22:54:24.356530    8708 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0516 22:54:24.356530    8708 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0516 22:54:24.356530    8708 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0516 22:54:24.356530    8708 cache.go:57] Caching tarball of preloaded images
	I0516 22:54:24.357526    8708 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0516 22:54:24.357526    8708 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0516 22:54:24.357526    8708 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\docker-flags-20220516225417-2444\config.json ...
	I0516 22:54:24.357526    8708 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\docker-flags-20220516225417-2444\config.json: {Name:mke485867f55dff4a4e7d1c1f4612d988254e782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0516 22:54:25.449460    8708 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0516 22:54:25.449536    8708 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:54:25.449898    8708 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:54:25.449935    8708 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
	I0516 22:54:25.450045    8708 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory, skipping pull
	I0516 22:54:25.450091    8708 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in cache, skipping pull
	I0516 22:54:25.450332    8708 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
	I0516 22:54:25.450332    8708 cache.go:160] Loading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from local cache
	I0516 22:54:25.450419    8708 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:54:27.779086    8708 image.go:219] response: {"errorDetail":{"message":"mkdir /var/lib/docker/tmp/docker-import-587465475: read-only file system"},"error":"mkdir /var/lib/docker/tmp/docker-import-587465475: read-only file system"}
	I0516 22:54:27.779135    8708 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from cached tarball
	I0516 22:54:27.779135    8708 cache.go:206] Successfully downloaded all kic artifacts
	I0516 22:54:27.779312    8708 start.go:352] acquiring machines lock for docker-flags-20220516225417-2444: {Name:mkfd2f8f222259e2ca6424640be0d83c827bc486 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:54:27.779566    8708 start.go:356] acquired machines lock for "docker-flags-20220516225417-2444" in 196.2µs
	I0516 22:54:27.779566    8708 start.go:91] Provisioning new machine with config: &{Name:docker-flags-20220516225417-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:docker-flags-20220516225417-2444 N
amespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0516 22:54:27.779566    8708 start.go:131] createHost starting for "" (driver="docker")
	I0516 22:54:27.786875    8708 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0516 22:54:27.786875    8708 start.go:165] libmachine.API.Create for "docker-flags-20220516225417-2444" (driver="docker")
	I0516 22:54:27.786875    8708 client.go:168] LocalClient.Create starting
	I0516 22:54:27.787917    8708 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 22:54:27.787917    8708 main.go:134] libmachine: Decoding PEM data...
	I0516 22:54:27.787917    8708 main.go:134] libmachine: Parsing certificate...
	I0516 22:54:27.787917    8708 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 22:54:27.787917    8708 main.go:134] libmachine: Decoding PEM data...
	I0516 22:54:27.787917    8708 main.go:134] libmachine: Parsing certificate...
	I0516 22:54:27.798503    8708 cli_runner.go:164] Run: docker network inspect docker-flags-20220516225417-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:54:28.899400    8708 cli_runner.go:211] docker network inspect docker-flags-20220516225417-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:54:28.899400    8708 cli_runner.go:217] Completed: docker network inspect docker-flags-20220516225417-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1007569s)
	I0516 22:54:28.907845    8708 network_create.go:272] running [docker network inspect docker-flags-20220516225417-2444] to gather additional debugging logs...
	I0516 22:54:28.907845    8708 cli_runner.go:164] Run: docker network inspect docker-flags-20220516225417-2444
	W0516 22:54:30.009731    8708 cli_runner.go:211] docker network inspect docker-flags-20220516225417-2444 returned with exit code 1
	I0516 22:54:30.009731    8708 cli_runner.go:217] Completed: docker network inspect docker-flags-20220516225417-2444: (1.1018769s)
	I0516 22:54:30.009731    8708 network_create.go:275] error running [docker network inspect docker-flags-20220516225417-2444]: docker network inspect docker-flags-20220516225417-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: docker-flags-20220516225417-2444
	I0516 22:54:30.009731    8708 network_create.go:277] output of [docker network inspect docker-flags-20220516225417-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: docker-flags-20220516225417-2444
	
	** /stderr **
	I0516 22:54:30.017737    8708 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 22:54:31.153756    8708 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1360097s)
	I0516 22:54:31.175751    8708 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00011c4c8] misses:0}
	I0516 22:54:31.175751    8708 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:54:31.175751    8708 network_create.go:115] attempt to create docker network docker-flags-20220516225417-2444 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0516 22:54:31.184752    8708 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220516225417-2444
	W0516 22:54:32.286181    8708 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220516225417-2444 returned with exit code 1
	I0516 22:54:32.286181    8708 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220516225417-2444: (1.1012023s)
	W0516 22:54:32.286181    8708 network_create.go:107] failed to create docker network docker-flags-20220516225417-2444 192.168.49.0/24, will retry: subnet is taken
	I0516 22:54:32.308725    8708 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00011c4c8] amended:false}} dirty:map[] misses:0}
	I0516 22:54:32.308725    8708 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:54:32.342290    8708 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00011c4c8] amended:true}} dirty:map[192.168.49.0:0xc00011c4c8 192.168.58.0:0xc000606230] misses:0}
	I0516 22:54:32.343232    8708 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:54:32.343232    8708 network_create.go:115] attempt to create docker network docker-flags-20220516225417-2444 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0516 22:54:32.350144    8708 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220516225417-2444
	W0516 22:54:33.435108    8708 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220516225417-2444 returned with exit code 1
	I0516 22:54:33.435108    8708 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220516225417-2444: (1.0849556s)
	W0516 22:54:33.435108    8708 network_create.go:107] failed to create docker network docker-flags-20220516225417-2444 192.168.58.0/24, will retry: subnet is taken
	I0516 22:54:33.455828    8708 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00011c4c8] amended:true}} dirty:map[192.168.49.0:0xc00011c4c8 192.168.58.0:0xc000606230] misses:1}
	I0516 22:54:33.455828    8708 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:54:33.473495    8708 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00011c4c8] amended:true}} dirty:map[192.168.49.0:0xc00011c4c8 192.168.58.0:0xc000606230 192.168.67.0:0xc00011c620] misses:1}
	I0516 22:54:33.474491    8708 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:54:33.474491    8708 network_create.go:115] attempt to create docker network docker-flags-20220516225417-2444 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0516 22:54:33.482225    8708 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220516225417-2444
	W0516 22:54:34.625220    8708 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220516225417-2444 returned with exit code 1
	I0516 22:54:34.625220    8708 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220516225417-2444: (1.1429862s)
	W0516 22:54:34.625220    8708 network_create.go:107] failed to create docker network docker-flags-20220516225417-2444 192.168.67.0/24, will retry: subnet is taken
	I0516 22:54:34.643201    8708 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00011c4c8] amended:true}} dirty:map[192.168.49.0:0xc00011c4c8 192.168.58.0:0xc000606230 192.168.67.0:0xc00011c620] misses:2}
	I0516 22:54:34.643201    8708 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:54:34.660131    8708 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00011c4c8] amended:true}} dirty:map[192.168.49.0:0xc00011c4c8 192.168.58.0:0xc000606230 192.168.67.0:0xc00011c620 192.168.76.0:0xc00011c7b8] misses:2}
	I0516 22:54:34.660131    8708 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:54:34.660131    8708 network_create.go:115] attempt to create docker network docker-flags-20220516225417-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0516 22:54:34.667131    8708 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220516225417-2444
	W0516 22:54:35.770766    8708 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220516225417-2444 returned with exit code 1
	I0516 22:54:35.770766    8708 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220516225417-2444: (1.1036255s)
	E0516 22:54:35.770766    8708 network_create.go:104] error while trying to create docker network docker-flags-20220516225417-2444 192.168.76.0/24: create docker network docker-flags-20220516225417-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220516225417-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network a806f40f3e848f9ed207cd824af90814c629bc0364de25b8765dc2631afa03c7 (br-a806f40f3e84): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	W0516 22:54:35.770766    8708 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network docker-flags-20220516225417-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220516225417-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network a806f40f3e848f9ed207cd824af90814c629bc0364de25b8765dc2631afa03c7 (br-a806f40f3e84): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network docker-flags-20220516225417-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220516225417-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network a806f40f3e848f9ed207cd824af90814c629bc0364de25b8765dc2631afa03c7 (br-a806f40f3e84): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	I0516 22:54:35.785773    8708 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 22:54:36.873786    8708 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0878635s)
	I0516 22:54:36.882986    8708 cli_runner.go:164] Run: docker volume create docker-flags-20220516225417-2444 --label name.minikube.sigs.k8s.io=docker-flags-20220516225417-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 22:54:37.986330    8708 cli_runner.go:211] docker volume create docker-flags-20220516225417-2444 --label name.minikube.sigs.k8s.io=docker-flags-20220516225417-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 22:54:37.986330    8708 cli_runner.go:217] Completed: docker volume create docker-flags-20220516225417-2444 --label name.minikube.sigs.k8s.io=docker-flags-20220516225417-2444 --label created_by.minikube.sigs.k8s.io=true: (1.1033348s)
	I0516 22:54:37.986330    8708 client.go:171] LocalClient.Create took 10.1993689s
	I0516 22:54:39.997594    8708 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:54:40.006529    8708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444
	W0516 22:54:41.095495    8708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444 returned with exit code 1
	I0516 22:54:41.095495    8708 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444: (1.0888956s)
	I0516 22:54:41.095495    8708 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220516225417-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	I0516 22:54:41.393197    8708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444
	W0516 22:54:42.504070    8708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444 returned with exit code 1
	I0516 22:54:42.504070    8708 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444: (1.1108634s)
	W0516 22:54:42.504070    8708 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220516225417-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	
	W0516 22:54:42.504070    8708 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220516225417-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	I0516 22:54:42.515481    8708 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:54:42.523529    8708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444
	W0516 22:54:43.671849    8708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444 returned with exit code 1
	I0516 22:54:43.671988    8708 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444: (1.1482317s)
	I0516 22:54:43.671988    8708 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220516225417-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	I0516 22:54:43.977392    8708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444
	W0516 22:54:45.035209    8708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444 returned with exit code 1
	I0516 22:54:45.035209    8708 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444: (1.0578077s)
	W0516 22:54:45.035209    8708 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220516225417-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	
	W0516 22:54:45.035209    8708 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220516225417-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	I0516 22:54:45.035209    8708 start.go:134] duration metric: createHost completed in 17.2554977s
	I0516 22:54:45.035209    8708 start.go:81] releasing machines lock for "docker-flags-20220516225417-2444", held for 17.2554977s
	W0516 22:54:45.035209    8708 start.go:608] error starting host: creating host: create: creating: setting up container node: creating volume for docker-flags-20220516225417-2444 container: docker volume create docker-flags-20220516225417-2444 --label name.minikube.sigs.k8s.io=docker-flags-20220516225417-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create docker-flags-20220516225417-2444: error while creating volume root path '/var/lib/docker/volumes/docker-flags-20220516225417-2444': mkdir /var/lib/docker/volumes/docker-flags-20220516225417-2444: read-only file system
	I0516 22:54:45.052207    8708 cli_runner.go:164] Run: docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}
	W0516 22:54:46.142362    8708 cli_runner.go:211] docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:54:46.142362    8708 cli_runner.go:217] Completed: docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}: (1.0901463s)
	I0516 22:54:46.142362    8708 delete.go:82] Unable to get host status for docker-flags-20220516225417-2444, assuming it has already been deleted: state: unknown state "docker-flags-20220516225417-2444": docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	W0516 22:54:46.142362    8708 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for docker-flags-20220516225417-2444 container: docker volume create docker-flags-20220516225417-2444 --label name.minikube.sigs.k8s.io=docker-flags-20220516225417-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create docker-flags-20220516225417-2444: error while creating volume root path '/var/lib/docker/volumes/docker-flags-20220516225417-2444': mkdir /var/lib/docker/volumes/docker-flags-20220516225417-2444: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for docker-flags-20220516225417-2444 container: docker volume create docker-flags-20220516225417-2444 --label name.minikube.sigs.k8s.io=docker-flags-20220516225417-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create docker-flags-20220516225417-2444: error while creating volume root path '/var/lib/docker/volumes/docker-flags-20220516225417-2444': mkdir /var/lib/docker/volumes/docker-flags-20220516225417-2444: read-only file system
	
	I0516 22:54:46.142362    8708 start.go:623] Will try again in 5 seconds ...
	I0516 22:54:51.148984    8708 start.go:352] acquiring machines lock for docker-flags-20220516225417-2444: {Name:mkfd2f8f222259e2ca6424640be0d83c827bc486 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:54:51.148984    8708 start.go:356] acquired machines lock for "docker-flags-20220516225417-2444" in 0s
	I0516 22:54:51.148984    8708 start.go:94] Skipping create...Using existing machine configuration
	I0516 22:54:51.148984    8708 fix.go:55] fixHost starting: 
	I0516 22:54:51.166983    8708 cli_runner.go:164] Run: docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}
	W0516 22:54:52.225964    8708 cli_runner.go:211] docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:54:52.225964    8708 cli_runner.go:217] Completed: docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}: (1.058973s)
	I0516 22:54:52.225964    8708 fix.go:103] recreateIfNeeded on docker-flags-20220516225417-2444: state= err=unknown state "docker-flags-20220516225417-2444": docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	I0516 22:54:52.225964    8708 fix.go:108] machineExists: false. err=machine does not exist
	I0516 22:54:52.229931    8708 out.go:177] * docker "docker-flags-20220516225417-2444" container is missing, will recreate.
	I0516 22:54:52.232921    8708 delete.go:124] DEMOLISHING docker-flags-20220516225417-2444 ...
	I0516 22:54:52.247919    8708 cli_runner.go:164] Run: docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}
	W0516 22:54:53.373643    8708 cli_runner.go:211] docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:54:53.373643    8708 cli_runner.go:217] Completed: docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}: (1.1255166s)
	W0516 22:54:53.373643    8708 stop.go:75] unable to get state: unknown state "docker-flags-20220516225417-2444": docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	I0516 22:54:53.373643    8708 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "docker-flags-20220516225417-2444": docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	I0516 22:54:53.390653    8708 cli_runner.go:164] Run: docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}
	W0516 22:54:54.488396    8708 cli_runner.go:211] docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:54:54.488396    8708 cli_runner.go:217] Completed: docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}: (1.0977338s)
	I0516 22:54:54.488396    8708 delete.go:82] Unable to get host status for docker-flags-20220516225417-2444, assuming it has already been deleted: state: unknown state "docker-flags-20220516225417-2444": docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	I0516 22:54:54.495420    8708 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-20220516225417-2444
	W0516 22:54:55.576199    8708 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-20220516225417-2444 returned with exit code 1
	I0516 22:54:55.576199    8708 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} docker-flags-20220516225417-2444: (1.0807701s)
	I0516 22:54:55.576199    8708 kic.go:356] could not find the container docker-flags-20220516225417-2444 to remove it. will try anyways
	I0516 22:54:55.583196    8708 cli_runner.go:164] Run: docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}
	W0516 22:54:56.641518    8708 cli_runner.go:211] docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:54:56.641518    8708 cli_runner.go:217] Completed: docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}: (1.0583135s)
	W0516 22:54:56.641518    8708 oci.go:84] error getting container status, will try to delete anyways: unknown state "docker-flags-20220516225417-2444": docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	I0516 22:54:56.649532    8708 cli_runner.go:164] Run: docker exec --privileged -t docker-flags-20220516225417-2444 /bin/bash -c "sudo init 0"
	W0516 22:54:57.762621    8708 cli_runner.go:211] docker exec --privileged -t docker-flags-20220516225417-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 22:54:57.762857    8708 cli_runner.go:217] Completed: docker exec --privileged -t docker-flags-20220516225417-2444 /bin/bash -c "sudo init 0": (1.1128473s)
	I0516 22:54:57.762929    8708 oci.go:641] error shutdown docker-flags-20220516225417-2444: docker exec --privileged -t docker-flags-20220516225417-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	I0516 22:54:58.782752    8708 cli_runner.go:164] Run: docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}
	W0516 22:54:59.843889    8708 cli_runner.go:211] docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:54:59.843889    8708 cli_runner.go:217] Completed: docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}: (1.061128s)
	I0516 22:54:59.844468    8708 oci.go:653] temporary error verifying shutdown: unknown state "docker-flags-20220516225417-2444": docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	I0516 22:54:59.844517    8708 oci.go:655] temporary error: container docker-flags-20220516225417-2444 status is  but expect it to be exited
	I0516 22:54:59.844517    8708 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "docker-flags-20220516225417-2444": docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	I0516 22:55:00.325195    8708 cli_runner.go:164] Run: docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}
	W0516 22:55:01.424884    8708 cli_runner.go:211] docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:55:01.424884    8708 cli_runner.go:217] Completed: docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}: (1.0996802s)
	I0516 22:55:01.424884    8708 oci.go:653] temporary error verifying shutdown: unknown state "docker-flags-20220516225417-2444": docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	I0516 22:55:01.424884    8708 oci.go:655] temporary error: container docker-flags-20220516225417-2444 status is  but expect it to be exited
	I0516 22:55:01.424884    8708 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "docker-flags-20220516225417-2444": docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	I0516 22:55:02.327480    8708 cli_runner.go:164] Run: docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}
	W0516 22:55:03.430664    8708 cli_runner.go:211] docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:55:03.430664    8708 cli_runner.go:217] Completed: docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}: (1.0904371s)
	I0516 22:55:03.430664    8708 oci.go:653] temporary error verifying shutdown: unknown state "docker-flags-20220516225417-2444": docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	I0516 22:55:03.430664    8708 oci.go:655] temporary error: container docker-flags-20220516225417-2444 status is  but expect it to be exited
	I0516 22:55:03.430664    8708 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "docker-flags-20220516225417-2444": docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	I0516 22:55:04.076179    8708 cli_runner.go:164] Run: docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}
	W0516 22:55:05.166247    8708 cli_runner.go:211] docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:55:05.166429    8708 cli_runner.go:217] Completed: docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}: (1.0899099s)
	I0516 22:55:05.166429    8708 oci.go:653] temporary error verifying shutdown: unknown state "docker-flags-20220516225417-2444": docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	I0516 22:55:05.166429    8708 oci.go:655] temporary error: container docker-flags-20220516225417-2444 status is  but expect it to be exited
	I0516 22:55:05.166429    8708 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "docker-flags-20220516225417-2444": docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	I0516 22:55:06.298172    8708 cli_runner.go:164] Run: docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}
	W0516 22:55:07.377800    8708 cli_runner.go:211] docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:55:07.377800    8708 cli_runner.go:217] Completed: docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}: (1.0793758s)
	I0516 22:55:07.377800    8708 oci.go:653] temporary error verifying shutdown: unknown state "docker-flags-20220516225417-2444": docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	I0516 22:55:07.377800    8708 oci.go:655] temporary error: container docker-flags-20220516225417-2444 status is  but expect it to be exited
	I0516 22:55:07.377800    8708 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "docker-flags-20220516225417-2444": docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	I0516 22:55:08.912883    8708 cli_runner.go:164] Run: docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}
	W0516 22:55:09.952153    8708 cli_runner.go:211] docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:55:09.952264    8708 cli_runner.go:217] Completed: docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}: (1.0389135s)
	I0516 22:55:09.952346    8708 oci.go:653] temporary error verifying shutdown: unknown state "docker-flags-20220516225417-2444": docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	I0516 22:55:09.952434    8708 oci.go:655] temporary error: container docker-flags-20220516225417-2444 status is  but expect it to be exited
	I0516 22:55:09.952434    8708 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "docker-flags-20220516225417-2444": docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	I0516 22:55:13.006404    8708 cli_runner.go:164] Run: docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}
	W0516 22:55:14.083615    8708 cli_runner.go:211] docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:55:14.083653    8708 cli_runner.go:217] Completed: docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}: (1.077032s)
	I0516 22:55:14.083770    8708 oci.go:653] temporary error verifying shutdown: unknown state "docker-flags-20220516225417-2444": docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	I0516 22:55:14.083770    8708 oci.go:655] temporary error: container docker-flags-20220516225417-2444 status is  but expect it to be exited
	I0516 22:55:14.083770    8708 oci.go:88] couldn't shut down docker-flags-20220516225417-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "docker-flags-20220516225417-2444": docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	 
	I0516 22:55:14.092688    8708 cli_runner.go:164] Run: docker rm -f -v docker-flags-20220516225417-2444
	I0516 22:55:15.184083    8708 cli_runner.go:217] Completed: docker rm -f -v docker-flags-20220516225417-2444: (1.0913857s)
	I0516 22:55:15.191892    8708 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-20220516225417-2444
	W0516 22:55:16.290851    8708 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-20220516225417-2444 returned with exit code 1
	I0516 22:55:16.290851    8708 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} docker-flags-20220516225417-2444: (1.0989505s)
	I0516 22:55:16.297425    8708 cli_runner.go:164] Run: docker network inspect docker-flags-20220516225417-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:55:17.341139    8708 cli_runner.go:211] docker network inspect docker-flags-20220516225417-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:55:17.341139    8708 cli_runner.go:217] Completed: docker network inspect docker-flags-20220516225417-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0435717s)
	I0516 22:55:17.349460    8708 network_create.go:272] running [docker network inspect docker-flags-20220516225417-2444] to gather additional debugging logs...
	I0516 22:55:17.349460    8708 cli_runner.go:164] Run: docker network inspect docker-flags-20220516225417-2444
	W0516 22:55:18.397872    8708 cli_runner.go:211] docker network inspect docker-flags-20220516225417-2444 returned with exit code 1
	I0516 22:55:18.397872    8708 cli_runner.go:217] Completed: docker network inspect docker-flags-20220516225417-2444: (1.0483379s)
	I0516 22:55:18.397872    8708 network_create.go:275] error running [docker network inspect docker-flags-20220516225417-2444]: docker network inspect docker-flags-20220516225417-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: docker-flags-20220516225417-2444
	I0516 22:55:18.397872    8708 network_create.go:277] output of [docker network inspect docker-flags-20220516225417-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: docker-flags-20220516225417-2444
	
	** /stderr **
	W0516 22:55:18.399334    8708 delete.go:139] delete failed (probably ok) <nil>
	I0516 22:55:18.399334    8708 fix.go:115] Sleeping 1 second for extra luck!
	I0516 22:55:19.413846    8708 start.go:131] createHost starting for "" (driver="docker")
	I0516 22:55:19.419824    8708 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0516 22:55:19.420581    8708 start.go:165] libmachine.API.Create for "docker-flags-20220516225417-2444" (driver="docker")
	I0516 22:55:19.420581    8708 client.go:168] LocalClient.Create starting
	I0516 22:55:19.421188    8708 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 22:55:19.421188    8708 main.go:134] libmachine: Decoding PEM data...
	I0516 22:55:19.421188    8708 main.go:134] libmachine: Parsing certificate...
	I0516 22:55:19.421706    8708 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 22:55:19.421909    8708 main.go:134] libmachine: Decoding PEM data...
	I0516 22:55:19.421909    8708 main.go:134] libmachine: Parsing certificate...
	I0516 22:55:19.430330    8708 cli_runner.go:164] Run: docker network inspect docker-flags-20220516225417-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:55:20.541187    8708 cli_runner.go:211] docker network inspect docker-flags-20220516225417-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:55:20.541187    8708 cli_runner.go:217] Completed: docker network inspect docker-flags-20220516225417-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1108474s)
	I0516 22:55:20.550194    8708 network_create.go:272] running [docker network inspect docker-flags-20220516225417-2444] to gather additional debugging logs...
	I0516 22:55:20.550194    8708 cli_runner.go:164] Run: docker network inspect docker-flags-20220516225417-2444
	W0516 22:55:21.654599    8708 cli_runner.go:211] docker network inspect docker-flags-20220516225417-2444 returned with exit code 1
	I0516 22:55:21.654599    8708 cli_runner.go:217] Completed: docker network inspect docker-flags-20220516225417-2444: (1.1043961s)
	I0516 22:55:21.654599    8708 network_create.go:275] error running [docker network inspect docker-flags-20220516225417-2444]: docker network inspect docker-flags-20220516225417-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: docker-flags-20220516225417-2444
	I0516 22:55:21.654599    8708 network_create.go:277] output of [docker network inspect docker-flags-20220516225417-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: docker-flags-20220516225417-2444
	
	** /stderr **
	I0516 22:55:21.663134    8708 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 22:55:22.795528    8708 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1322516s)
	I0516 22:55:22.812247    8708 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00011c4c8] amended:true}} dirty:map[192.168.49.0:0xc00011c4c8 192.168.58.0:0xc000606230 192.168.67.0:0xc00011c620 192.168.76.0:0xc00011c7b8] misses:2}
	I0516 22:55:22.812247    8708 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:55:22.828657    8708 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00011c4c8] amended:true}} dirty:map[192.168.49.0:0xc00011c4c8 192.168.58.0:0xc000606230 192.168.67.0:0xc00011c620 192.168.76.0:0xc00011c7b8] misses:3}
	I0516 22:55:22.828657    8708 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:55:22.843746    8708 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00011c4c8 192.168.58.0:0xc000606230 192.168.67.0:0xc00011c620 192.168.76.0:0xc00011c7b8] amended:false}} dirty:map[] misses:0}
	I0516 22:55:22.843746    8708 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:55:22.857305    8708 network.go:279] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00011c4c8 192.168.58.0:0xc000606230 192.168.67.0:0xc00011c620 192.168.76.0:0xc00011c7b8] amended:false}} dirty:map[] misses:0}
	I0516 22:55:22.857305    8708 network.go:238] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:55:22.872287    8708 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00011c4c8 192.168.58.0:0xc000606230 192.168.67.0:0xc00011c620 192.168.76.0:0xc00011c7b8] amended:true}} dirty:map[192.168.49.0:0xc00011c4c8 192.168.58.0:0xc000606230 192.168.67.0:0xc00011c620 192.168.76.0:0xc00011c7b8 192.168.85.0:0xc000006b50] misses:0}
	I0516 22:55:22.872287    8708 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:55:22.872287    8708 network_create.go:115] attempt to create docker network docker-flags-20220516225417-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0516 22:55:22.883345    8708 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220516225417-2444
	W0516 22:55:23.991656    8708 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220516225417-2444 returned with exit code 1
	I0516 22:55:23.991656    8708 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220516225417-2444: (1.108302s)
	E0516 22:55:23.991656    8708 network_create.go:104] error while trying to create docker network docker-flags-20220516225417-2444 192.168.85.0/24: create docker network docker-flags-20220516225417-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220516225417-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 1848acf11be41b91a27e33094534b52d2a9d0fee88c71d095fdb7dc1f5cf9b9e (br-1848acf11be4): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	W0516 22:55:23.991656    8708 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network docker-flags-20220516225417-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220516225417-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 1848acf11be41b91a27e33094534b52d2a9d0fee88c71d095fdb7dc1f5cf9b9e (br-1848acf11be4): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network docker-flags-20220516225417-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220516225417-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 1848acf11be41b91a27e33094534b52d2a9d0fee88c71d095fdb7dc1f5cf9b9e (br-1848acf11be4): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	I0516 22:55:24.009639    8708 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 22:55:25.109964    8708 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1003153s)
	I0516 22:55:25.116961    8708 cli_runner.go:164] Run: docker volume create docker-flags-20220516225417-2444 --label name.minikube.sigs.k8s.io=docker-flags-20220516225417-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 22:55:26.198699    8708 cli_runner.go:211] docker volume create docker-flags-20220516225417-2444 --label name.minikube.sigs.k8s.io=docker-flags-20220516225417-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 22:55:26.198699    8708 cli_runner.go:217] Completed: docker volume create docker-flags-20220516225417-2444 --label name.minikube.sigs.k8s.io=docker-flags-20220516225417-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0817291s)
	I0516 22:55:26.198699    8708 client.go:171] LocalClient.Create took 6.7780619s
	I0516 22:55:28.214581    8708 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:55:28.232085    8708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444
	W0516 22:55:29.330242    8708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444 returned with exit code 1
	I0516 22:55:29.330242    8708 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444: (1.0981484s)
	I0516 22:55:29.330242    8708 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220516225417-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	I0516 22:55:29.672900    8708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444
	W0516 22:55:30.809075    8708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444 returned with exit code 1
	I0516 22:55:30.809219    8708 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444: (1.1361195s)
	W0516 22:55:30.809399    8708 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220516225417-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	
	W0516 22:55:30.809494    8708 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220516225417-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	I0516 22:55:30.824161    8708 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:55:30.832588    8708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444
	W0516 22:55:31.945370    8708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444 returned with exit code 1
	I0516 22:55:31.945416    8708 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444: (1.112714s)
	I0516 22:55:31.945668    8708 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220516225417-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	I0516 22:55:32.177120    8708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444
	W0516 22:55:33.296182    8708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444 returned with exit code 1
	I0516 22:55:33.296223    8708 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444: (1.118943s)
	W0516 22:55:33.296541    8708 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220516225417-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	
	W0516 22:55:33.296605    8708 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220516225417-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	I0516 22:55:33.296643    8708 start.go:134] duration metric: createHost completed in 13.8826805s
	I0516 22:55:33.307904    8708 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:55:33.315909    8708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444
	W0516 22:55:34.408656    8708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444 returned with exit code 1
	I0516 22:55:34.408762    8708 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444: (1.0926656s)
	I0516 22:55:34.408938    8708 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220516225417-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	I0516 22:55:34.674231    8708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444
	W0516 22:55:35.734038    8708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444 returned with exit code 1
	I0516 22:55:35.734038    8708 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444: (1.0597987s)
	W0516 22:55:35.734038    8708 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220516225417-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	
	W0516 22:55:35.734038    8708 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220516225417-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	I0516 22:55:35.745787    8708 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:55:35.752982    8708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444
	W0516 22:55:36.855988    8708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444 returned with exit code 1
	I0516 22:55:36.856190    8708 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444: (1.102932s)
	I0516 22:55:36.856379    8708 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220516225417-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	I0516 22:55:37.069161    8708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444
	W0516 22:55:38.276314    8708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444 returned with exit code 1
	I0516 22:55:38.276361    8708 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444: (1.2069685s)
	W0516 22:55:38.276651    8708 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220516225417-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	
	W0516 22:55:38.276702    8708 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220516225417-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220516225417-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	I0516 22:55:38.276702    8708 fix.go:57] fixHost completed within 47.1273227s
	I0516 22:55:38.276702    8708 start.go:81] releasing machines lock for "docker-flags-20220516225417-2444", held for 47.1273227s
	W0516 22:55:38.277374    8708 out.go:239] * Failed to start docker container. Running "minikube delete -p docker-flags-20220516225417-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for docker-flags-20220516225417-2444 container: docker volume create docker-flags-20220516225417-2444 --label name.minikube.sigs.k8s.io=docker-flags-20220516225417-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create docker-flags-20220516225417-2444: error while creating volume root path '/var/lib/docker/volumes/docker-flags-20220516225417-2444': mkdir /var/lib/docker/volumes/docker-flags-20220516225417-2444: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p docker-flags-20220516225417-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for docker-flags-20220516225417-2444 container: docker volume create docker-flags-20220516225417-2444 --label name.minikube.sigs.k8s.io=docker-flags-20220516225417-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create docker-flags-20220516225417-2444: error while creating volume root path '/var/lib/docker/volumes/docker-flags-20220516225417-2444': mkdir /var/lib/docker/volumes/docker-flags-20220516225417-2444: read-only file system
	
	I0516 22:55:38.283441    8708 out.go:177] 
	W0516 22:55:38.285336    8708 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for docker-flags-20220516225417-2444 container: docker volume create docker-flags-20220516225417-2444 --label name.minikube.sigs.k8s.io=docker-flags-20220516225417-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create docker-flags-20220516225417-2444: error while creating volume root path '/var/lib/docker/volumes/docker-flags-20220516225417-2444': mkdir /var/lib/docker/volumes/docker-flags-20220516225417-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for docker-flags-20220516225417-2444 container: docker volume create docker-flags-20220516225417-2444 --label name.minikube.sigs.k8s.io=docker-flags-20220516225417-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create docker-flags-20220516225417-2444: error while creating volume root path '/var/lib/docker/volumes/docker-flags-20220516225417-2444': mkdir /var/lib/docker/volumes/docker-flags-20220516225417-2444: read-only file system
	
	W0516 22:55:38.286117    8708 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0516 22:55:38.286195    8708 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0516 22:55:38.288473    8708 out.go:177] 

** /stderr **
docker_test.go:47: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p docker-flags-20220516225417-2444 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker" : exit status 60
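All three failures in this block trace back to the same daemon error: `docker volume create` cannot create a directory under a read-only `/var/lib/docker`. A minimal, hedged sketch of probing for that condition (the function name `check_rw` and the probe filename are illustrative, not part of minikube; `/var/lib/docker` is the standard Docker data root):

```shell
#!/bin/sh
# Probe whether a Docker data root is writable, mimicking the failure mode
# above ("mkdir ...: read-only file system"). Works on any directory path.
check_rw() {
  probe="$1/.minikube-rw-probe"      # illustrative probe file name
  if touch "$probe" 2>/dev/null; then
    rm -f "$probe"
    echo "writable: $1"
  else
    echo "read-only: $1"             # matches the daemon error in this log
  fi
}

check_rw "${DATA_ROOT:-/var/lib/docker}"
```

If the probe reports read-only, the "Suggestion: Restart Docker" in the log is the documented remediation (see the linked issue #6825).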
docker_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-20220516225417-2444 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p docker-flags-20220516225417-2444 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 80 (3.2628589s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "docker-flags-20220516225417-2444": docker container inspect docker-flags-20220516225417-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_d4f85ee29175a4f8b67ccfa3331e6e8264cb6e77_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
docker_test.go:52: failed to 'systemctl show docker' inside minikube. args "out/minikube-windows-amd64.exe -p docker-flags-20220516225417-2444 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 80
docker_test.go:57: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:57: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:61: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-20220516225417-2444 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:61: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p docker-flags-20220516225417-2444 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 80 (3.2728031s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "docker-flags-20220516225417-2444": docker container inspect docker-flags-20220516225417-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_e7205990054f4366ee7f5bb530c13b1f3df973dc_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
docker_test.go:63: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-windows-amd64.exe -p docker-flags-20220516225417-2444 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 80
docker_test.go:67: expected "out/minikube-windows-amd64.exe -p docker-flags-20220516225417-2444 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
panic.go:482: *** TestDockerFlags FAILED at 2022-05-16 22:55:44.9393823 +0000 GMT m=+3612.605917701
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestDockerFlags]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect docker-flags-20220516225417-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect docker-flags-20220516225417-2444: exit status 1 (1.2186812s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: docker-flags-20220516225417-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p docker-flags-20220516225417-2444 -n docker-flags-20220516225417-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p docker-flags-20220516225417-2444 -n docker-flags-20220516225417-2444: exit status 7 (3.0133148s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:55:49.155493    7220 status.go:247] status error: host: state: unknown state "docker-flags-20220516225417-2444": docker container inspect docker-flags-20220516225417-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220516225417-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-20220516225417-2444" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "docker-flags-20220516225417-2444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-20220516225417-2444
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-20220516225417-2444: (8.6846267s)
--- FAIL: TestDockerFlags (100.86s)
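The `FOO=BAR`/`BAZ=BAT` assertions above (docker_test.go:57) reduce to a substring check against the daemon's `Environment=` line from `systemctl show docker`. A minimal sketch of that check, assuming a healthy run; the `env_line` value below is illustrative of a passing cluster, not taken from this log (the actual run returned empty output because the container never started):

```shell
#!/bin/sh
# Substring check in the spirit of docker_test.go: was each --docker-env
# pair propagated into the docker daemon's Environment line?
has_env() {
  # $1: systemctl Environment line, $2: KEY=VALUE pair to look for
  case "$1" in
    *"$2"*) return 0 ;;
    *)      return 1 ;;
  esac
}

env_line='Environment=FOO=BAR BAZ=BAT'   # illustrative passing output
for kv in 'FOO=BAR' 'BAZ=BAT'; do
  if has_env "$env_line" "$kv"; then
    echo "found $kv"
  else
    echo "missing $kv"
  fi
done
```

In this run the check never gets a real `Environment=` line to inspect, so the failure messages quote the empty string `"\n\n"` rather than a mismatched value.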

TestForceSystemdFlag (98.42s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-20220516225238-2444 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p force-systemd-flag-20220516225238-2444 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker: exit status 60 (1m22.4981184s)

-- stdout --
	* [force-systemd-flag-20220516225238-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node force-systemd-flag-20220516225238-2444 in cluster force-systemd-flag-20220516225238-2444
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-flag-20220516225238-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0516 22:52:38.811389    8316 out.go:296] Setting OutFile to fd 1392 ...
	I0516 22:52:38.867090    8316 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:52:38.867090    8316 out.go:309] Setting ErrFile to fd 1748...
	I0516 22:52:38.867090    8316 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:52:38.877735    8316 out.go:303] Setting JSON to false
	I0516 22:52:38.879572    8316 start.go:115] hostinfo: {"hostname":"minikube2","uptime":4671,"bootTime":1652736887,"procs":155,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0516 22:52:38.879572    8316 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0516 22:52:38.884203    8316 out.go:177] * [force-systemd-flag-20220516225238-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0516 22:52:38.887746    8316 notify.go:193] Checking for updates...
	I0516 22:52:38.893983    8316 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0516 22:52:38.896575    8316 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0516 22:52:38.899112    8316 out.go:177]   - MINIKUBE_LOCATION=12739
	I0516 22:52:38.901199    8316 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0516 22:52:38.905603    8316 config.go:178] Loaded profile config "missing-upgrade-20220516224650-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0516 22:52:38.905714    8316 config.go:178] Loaded profile config "multinode-20220516223121-2444-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:52:38.905714    8316 config.go:178] Loaded profile config "pause-20220516225202-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:52:38.906748    8316 config.go:178] Loaded profile config "running-upgrade-20220516224826-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0516 22:52:38.906827    8316 driver.go:358] Setting default libvirt URI to qemu:///system
	I0516 22:52:41.734655    8316 docker.go:137] docker version: linux-20.10.14
	I0516 22:52:41.743976    8316 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:52:43.858962    8316 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1149125s)
	I0516 22:52:43.859589    8316 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 22:52:42.7604095 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:52:43.863523    8316 out.go:177] * Using the docker driver based on user configuration
	I0516 22:52:43.866711    8316 start.go:284] selected driver: docker
	I0516 22:52:43.866711    8316 start.go:806] validating driver "docker" against <nil>
	I0516 22:52:43.866711    8316 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0516 22:52:44.451865    8316 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:52:46.591955    8316 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1400724s)
	I0516 22:52:46.591955    8316 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 22:52:45.4970002 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:52:46.592488    8316 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0516 22:52:46.593380    8316 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0516 22:52:46.614722    8316 out.go:177] * Using Docker Desktop driver with the root privilege
	I0516 22:52:46.617589    8316 cni.go:95] Creating CNI manager for ""
	I0516 22:52:46.617589    8316 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0516 22:52:46.617819    8316 start_flags.go:306] config:
	{Name:force-systemd-flag-20220516225238-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:force-systemd-flag-20220516225238-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 22:52:46.620710    8316 out.go:177] * Starting control plane node force-systemd-flag-20220516225238-2444 in cluster force-systemd-flag-20220516225238-2444
	I0516 22:52:46.624210    8316 cache.go:120] Beginning downloading kic base image for docker with docker
	I0516 22:52:46.626254    8316 out.go:177] * Pulling base image ...
	I0516 22:52:46.629246    8316 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0516 22:52:46.629246    8316 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0516 22:52:46.629246    8316 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0516 22:52:46.630262    8316 cache.go:57] Caching tarball of preloaded images
	I0516 22:52:46.630413    8316 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0516 22:52:46.630413    8316 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0516 22:52:46.630980    8316 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-flag-20220516225238-2444\config.json ...
	I0516 22:52:46.631192    8316 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-flag-20220516225238-2444\config.json: {Name:mka3a6c3d8b2fb99664f0e0126f5858a6c7ce7e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0516 22:52:47.724933    8316 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0516 22:52:47.725103    8316 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:52:47.725434    8316 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:52:47.725434    8316 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
	I0516 22:52:47.725604    8316 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory, skipping pull
	I0516 22:52:47.725665    8316 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in cache, skipping pull
	I0516 22:52:47.725808    8316 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
	I0516 22:52:47.725808    8316 cache.go:160] Loading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from local cache
	I0516 22:52:47.725808    8316 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:52:50.027242    8316 image.go:219] response: {"errorDetail":{"message":"mkdir /var/lib/docker/tmp/docker-import-612029279: read-only file system"},"error":"mkdir /var/lib/docker/tmp/docker-import-612029279: read-only file system"}
	I0516 22:52:50.027317    8316 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from cached tarball
	I0516 22:52:50.027371    8316 cache.go:206] Successfully downloaded all kic artifacts
	I0516 22:52:50.027519    8316 start.go:352] acquiring machines lock for force-systemd-flag-20220516225238-2444: {Name:mk40c438ade02e8947a33971af3c4bfe9223e4de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:52:50.027519    8316 start.go:356] acquired machines lock for "force-systemd-flag-20220516225238-2444" in 0s
	I0516 22:52:50.027519    8316 start.go:91] Provisioning new machine with config: &{Name:force-systemd-flag-20220516225238-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:force-systemd-flag-20220516225238-2444 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8
443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0516 22:52:50.027519    8316 start.go:131] createHost starting for "" (driver="docker")
	I0516 22:52:50.032833    8316 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0516 22:52:50.033383    8316 start.go:165] libmachine.API.Create for "force-systemd-flag-20220516225238-2444" (driver="docker")
	I0516 22:52:50.033478    8316 client.go:168] LocalClient.Create starting
	I0516 22:52:50.033613    8316 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 22:52:50.033613    8316 main.go:134] libmachine: Decoding PEM data...
	I0516 22:52:50.033613    8316 main.go:134] libmachine: Parsing certificate...
	I0516 22:52:50.034414    8316 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 22:52:50.034414    8316 main.go:134] libmachine: Decoding PEM data...
	I0516 22:52:50.034414    8316 main.go:134] libmachine: Parsing certificate...
	I0516 22:52:50.047407    8316 cli_runner.go:164] Run: docker network inspect force-systemd-flag-20220516225238-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:52:51.201264    8316 cli_runner.go:211] docker network inspect force-systemd-flag-20220516225238-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:52:51.201450    8316 cli_runner.go:217] Completed: docker network inspect force-systemd-flag-20220516225238-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1536149s)
	I0516 22:52:51.210754    8316 network_create.go:272] running [docker network inspect force-systemd-flag-20220516225238-2444] to gather additional debugging logs...
	I0516 22:52:51.210754    8316 cli_runner.go:164] Run: docker network inspect force-systemd-flag-20220516225238-2444
	W0516 22:52:52.333510    8316 cli_runner.go:211] docker network inspect force-systemd-flag-20220516225238-2444 returned with exit code 1
	I0516 22:52:52.333762    8316 cli_runner.go:217] Completed: docker network inspect force-systemd-flag-20220516225238-2444: (1.1227466s)
	I0516 22:52:52.333762    8316 network_create.go:275] error running [docker network inspect force-systemd-flag-20220516225238-2444]: docker network inspect force-systemd-flag-20220516225238-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-flag-20220516225238-2444
	I0516 22:52:52.333821    8316 network_create.go:277] output of [docker network inspect force-systemd-flag-20220516225238-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-flag-20220516225238-2444
	
	** /stderr **
	I0516 22:52:52.347846    8316 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 22:52:53.448846    8316 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1009908s)
	I0516 22:52:53.472175    8316 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00083a478] misses:0}
	I0516 22:52:53.472284    8316 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:52:53.472358    8316 network_create.go:115] attempt to create docker network force-systemd-flag-20220516225238-2444 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0516 22:52:53.481481    8316 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220516225238-2444
	W0516 22:52:54.575045    8316 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220516225238-2444 returned with exit code 1
	I0516 22:52:54.575271    8316 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220516225238-2444: (1.0935554s)
	W0516 22:52:54.575271    8316 network_create.go:107] failed to create docker network force-systemd-flag-20220516225238-2444 192.168.49.0/24, will retry: subnet is taken
	I0516 22:52:54.595830    8316 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00083a478] amended:false}} dirty:map[] misses:0}
	I0516 22:52:54.595830    8316 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:52:54.615999    8316 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00083a478] amended:true}} dirty:map[192.168.49.0:0xc00083a478 192.168.58.0:0xc00045cf00] misses:0}
	I0516 22:52:54.615999    8316 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:52:54.615999    8316 network_create.go:115] attempt to create docker network force-systemd-flag-20220516225238-2444 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0516 22:52:54.625776    8316 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220516225238-2444
	W0516 22:52:55.710612    8316 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220516225238-2444 returned with exit code 1
	I0516 22:52:55.710612    8316 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220516225238-2444: (1.0847722s)
	W0516 22:52:55.710612    8316 network_create.go:107] failed to create docker network force-systemd-flag-20220516225238-2444 192.168.58.0/24, will retry: subnet is taken
	I0516 22:52:55.731412    8316 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00083a478] amended:true}} dirty:map[192.168.49.0:0xc00083a478 192.168.58.0:0xc00045cf00] misses:1}
	I0516 22:52:55.732215    8316 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:52:55.751184    8316 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00083a478] amended:true}} dirty:map[192.168.49.0:0xc00083a478 192.168.58.0:0xc00045cf00 192.168.67.0:0xc00083a510] misses:1}
	I0516 22:52:55.751184    8316 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:52:55.751184    8316 network_create.go:115] attempt to create docker network force-systemd-flag-20220516225238-2444 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0516 22:52:55.760855    8316 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220516225238-2444
	W0516 22:52:56.843639    8316 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220516225238-2444 returned with exit code 1
	I0516 22:52:56.843933    8316 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220516225238-2444: (1.0827494s)
	W0516 22:52:56.843933    8316 network_create.go:107] failed to create docker network force-systemd-flag-20220516225238-2444 192.168.67.0/24, will retry: subnet is taken
	I0516 22:52:56.863083    8316 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00083a478] amended:true}} dirty:map[192.168.49.0:0xc00083a478 192.168.58.0:0xc00045cf00 192.168.67.0:0xc00083a510] misses:2}
	I0516 22:52:56.864096    8316 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:52:56.885496    8316 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00083a478] amended:true}} dirty:map[192.168.49.0:0xc00083a478 192.168.58.0:0xc00045cf00 192.168.67.0:0xc00083a510 192.168.76.0:0xc00045d3f8] misses:2}
	I0516 22:52:56.885496    8316 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:52:56.885777    8316 network_create.go:115] attempt to create docker network force-systemd-flag-20220516225238-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0516 22:52:56.896791    8316 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220516225238-2444
	W0516 22:52:58.025031    8316 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220516225238-2444 returned with exit code 1
	I0516 22:52:58.025104    8316 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220516225238-2444: (1.1280222s)
	E0516 22:52:58.025135    8316 network_create.go:104] error while trying to create docker network force-systemd-flag-20220516225238-2444 192.168.76.0/24: create docker network force-systemd-flag-20220516225238-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220516225238-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d954642983dabbc25fecb0e672c11cfb2e66d4455c199f68564f2a4d57183c8d (br-d954642983da): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	W0516 22:52:58.025135    8316 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network force-systemd-flag-20220516225238-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220516225238-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d954642983dabbc25fecb0e672c11cfb2e66d4455c199f68564f2a4d57183c8d (br-d954642983da): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network force-systemd-flag-20220516225238-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220516225238-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d954642983dabbc25fecb0e672c11cfb2e66d4455c199f68564f2a4d57183c8d (br-d954642983da): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
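The four failed attempts above walk a fixed ladder of private subnets: 192.168.49.0/24, then 58, 67, and 76. A minimal sketch of that fallback sequence as read off this log (the start octet and the step of 9 are observations from these lines, not taken from minikube's source):

```go
package main

import "fmt"

// candidateSubnets lists the /24 blocks tried in the log above: start at
// 192.168.49.0/24 and, when a subnet is taken, step the third octet by 9
// (49, 58, 67, 76). Treat start and step as inferred from this log only.
func candidateSubnets(n int) []string {
	subnets := make([]string, 0, n)
	for octet := 49; len(subnets) < n && octet <= 255; octet += 9 {
		subnets = append(subnets, fmt.Sprintf("192.168.%d.0/24", octet))
	}
	return subnets
}

func main() {
	// The four attempts seen above before minikube reports "un-retryable".
	for _, s := range candidateSubnets(4) {
		fmt.Println(s)
	}
}
```

Each candidate is handed to `docker network create --subnet=... --gateway=...` exactly as shown in the Run lines above; here all four collide with existing bridge networks on the host.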
	I0516 22:52:58.043562    8316 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 22:52:59.160423    8316 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1168521s)
	I0516 22:52:59.168679    8316 cli_runner.go:164] Run: docker volume create force-systemd-flag-20220516225238-2444 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220516225238-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 22:53:00.275646    8316 cli_runner.go:211] docker volume create force-systemd-flag-20220516225238-2444 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220516225238-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 22:53:00.275646    8316 cli_runner.go:217] Completed: docker volume create force-systemd-flag-20220516225238-2444 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220516225238-2444 --label created_by.minikube.sigs.k8s.io=true: (1.1069579s)
	I0516 22:53:00.275646    8316 client.go:171] LocalClient.Create took 10.2420833s
	I0516 22:53:02.301803    8316 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:53:02.316917    8316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444
	W0516 22:53:03.423364    8316 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444 returned with exit code 1
	I0516 22:53:03.423364    8316 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444: (1.1063557s)
	I0516 22:53:03.423364    8316 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220516225238-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	I0516 22:53:03.711931    8316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444
	W0516 22:53:04.812914    8316 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444 returned with exit code 1
	I0516 22:53:04.813092    8316 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444: (1.1009736s)
	W0516 22:53:04.813092    8316 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220516225238-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	
	W0516 22:53:04.813092    8316 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220516225238-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
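The repeated `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` calls above are how the SSH port is looked up; they fail here only because the container was never created. To show what that template extracts when the container exists, a sketch that executes the same template against a minimal stand-in for the inspect JSON (field names mirror Docker's output, sample values are invented):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// portBinding and inspectDoc are minimal stand-ins for the slice of
// `docker container inspect` output that the template reads.
type portBinding struct {
	HostIP   string
	HostPort string
}

type inspectDoc struct {
	NetworkSettings struct {
		Ports map[string][]portBinding
	}
}

// hostPort runs the same Go template passed to `docker container inspect -f`
// above, pulling out the host port mapped to the container's port 22.
func hostPort(doc inspectDoc) (string, error) {
	tmpl, err := template.New("port").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, doc); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	var doc inspectDoc
	doc.NetworkSettings.Ports = map[string][]portBinding{
		"22/tcp": {{HostIP: "127.0.0.1", HostPort: "55044"}},
	}
	p, err := hostPort(doc)
	if err != nil {
		panic(err)
	}
	fmt.Println(p) // the host port that would back the SSH session
}
```

With no container, the template never runs and the CLI exits 1 with `No such container`, which is the error wrapped into every `get ssh host-port` message above.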
	I0516 22:53:04.826211    8316 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:53:04.834220    8316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444
	W0516 22:53:05.936615    8316 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444 returned with exit code 1
	I0516 22:53:05.936615    8316 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444: (1.1023866s)
	I0516 22:53:05.936615    8316 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220516225238-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	I0516 22:53:06.240759    8316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444
	W0516 22:53:07.428715    8316 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444 returned with exit code 1
	I0516 22:53:07.428715    8316 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444: (1.1879469s)
	W0516 22:53:07.428715    8316 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220516225238-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	
	W0516 22:53:07.428715    8316 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220516225238-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	I0516 22:53:07.428715    8316 start.go:134] duration metric: createHost completed in 17.4004576s
	I0516 22:53:07.428715    8316 start.go:81] releasing machines lock for "force-systemd-flag-20220516225238-2444", held for 17.4010523s
	W0516 22:53:07.429717    8316 start.go:608] error starting host: creating host: create: creating: setting up container node: creating volume for force-systemd-flag-20220516225238-2444 container: docker volume create force-systemd-flag-20220516225238-2444 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220516225238-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-flag-20220516225238-2444: error while creating volume root path '/var/lib/docker/volumes/force-systemd-flag-20220516225238-2444': mkdir /var/lib/docker/volumes/force-systemd-flag-20220516225238-2444: read-only file system
	I0516 22:53:07.444675    8316 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}
	W0516 22:53:08.584497    8316 cli_runner.go:211] docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:53:08.584497    8316 cli_runner.go:217] Completed: docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}: (1.139812s)
	I0516 22:53:08.584497    8316 delete.go:82] Unable to get host status for force-systemd-flag-20220516225238-2444, assuming it has already been deleted: state: unknown state "force-systemd-flag-20220516225238-2444": docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	W0516 22:53:08.584497    8316 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for force-systemd-flag-20220516225238-2444 container: docker volume create force-systemd-flag-20220516225238-2444 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220516225238-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-flag-20220516225238-2444: error while creating volume root path '/var/lib/docker/volumes/force-systemd-flag-20220516225238-2444': mkdir /var/lib/docker/volumes/force-systemd-flag-20220516225238-2444: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for force-systemd-flag-20220516225238-2444 container: docker volume create force-systemd-flag-20220516225238-2444 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220516225238-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-flag-20220516225238-2444: error while creating volume root path '/var/lib/docker/volumes/force-systemd-flag-20220516225238-2444': mkdir /var/lib/docker/volumes/force-systemd-flag-20220516225238-2444: read-only file system
	
	I0516 22:53:08.584497    8316 start.go:623] Will try again in 5 seconds ...
	I0516 22:53:13.585663    8316 start.go:352] acquiring machines lock for force-systemd-flag-20220516225238-2444: {Name:mk40c438ade02e8947a33971af3c4bfe9223e4de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:53:13.585933    8316 start.go:356] acquired machines lock for "force-systemd-flag-20220516225238-2444" in 164.2µs
	I0516 22:53:13.586092    8316 start.go:94] Skipping create...Using existing machine configuration
	I0516 22:53:13.586092    8316 fix.go:55] fixHost starting: 
	I0516 22:53:13.604779    8316 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}
	W0516 22:53:14.690373    8316 cli_runner.go:211] docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:53:14.690373    8316 cli_runner.go:217] Completed: docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}: (1.0855849s)
	I0516 22:53:14.690373    8316 fix.go:103] recreateIfNeeded on force-systemd-flag-20220516225238-2444: state= err=unknown state "force-systemd-flag-20220516225238-2444": docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	I0516 22:53:14.690373    8316 fix.go:108] machineExists: false. err=machine does not exist
	I0516 22:53:14.693393    8316 out.go:177] * docker "force-systemd-flag-20220516225238-2444" container is missing, will recreate.
	I0516 22:53:14.697399    8316 delete.go:124] DEMOLISHING force-systemd-flag-20220516225238-2444 ...
	I0516 22:53:14.719743    8316 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}
	W0516 22:53:15.823786    8316 cli_runner.go:211] docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:53:15.823786    8316 cli_runner.go:217] Completed: docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}: (1.1038349s)
	W0516 22:53:15.823786    8316 stop.go:75] unable to get state: unknown state "force-systemd-flag-20220516225238-2444": docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	I0516 22:53:15.823786    8316 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "force-systemd-flag-20220516225238-2444": docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	I0516 22:53:15.840242    8316 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}
	W0516 22:53:16.990163    8316 cli_runner.go:211] docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:53:16.990163    8316 cli_runner.go:217] Completed: docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}: (1.1497824s)
	I0516 22:53:16.990229    8316 delete.go:82] Unable to get host status for force-systemd-flag-20220516225238-2444, assuming it has already been deleted: state: unknown state "force-systemd-flag-20220516225238-2444": docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	I0516 22:53:16.998037    8316 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-20220516225238-2444
	W0516 22:53:18.119945    8316 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-20220516225238-2444 returned with exit code 1
	I0516 22:53:18.120054    8316 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} force-systemd-flag-20220516225238-2444: (1.1217962s)
	I0516 22:53:18.120110    8316 kic.go:356] could not find the container force-systemd-flag-20220516225238-2444 to remove it. will try anyways
	I0516 22:53:18.128650    8316 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}
	W0516 22:53:19.261047    8316 cli_runner.go:211] docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:53:19.261176    8316 cli_runner.go:217] Completed: docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}: (1.1323401s)
	W0516 22:53:19.261292    8316 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-flag-20220516225238-2444": docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	I0516 22:53:19.270237    8316 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-flag-20220516225238-2444 /bin/bash -c "sudo init 0"
	W0516 22:53:20.399506    8316 cli_runner.go:211] docker exec --privileged -t force-systemd-flag-20220516225238-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 22:53:20.399536    8316 cli_runner.go:217] Completed: docker exec --privileged -t force-systemd-flag-20220516225238-2444 /bin/bash -c "sudo init 0": (1.1290158s)
	I0516 22:53:20.399609    8316 oci.go:641] error shutdown force-systemd-flag-20220516225238-2444: docker exec --privileged -t force-systemd-flag-20220516225238-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	I0516 22:53:21.424343    8316 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}
	W0516 22:53:22.576588    8316 cli_runner.go:211] docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:53:22.576588    8316 cli_runner.go:217] Completed: docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}: (1.1516316s)
	I0516 22:53:22.576588    8316 oci.go:653] temporary error verifying shutdown: unknown state "force-systemd-flag-20220516225238-2444": docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	I0516 22:53:22.576588    8316 oci.go:655] temporary error: container force-systemd-flag-20220516225238-2444 status is  but expect it to be exited
	I0516 22:53:22.576588    8316 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20220516225238-2444": docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	I0516 22:53:23.059946    8316 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}
	W0516 22:53:24.251170    8316 cli_runner.go:211] docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:53:24.251359    8316 cli_runner.go:217] Completed: docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}: (1.1910977s)
	I0516 22:53:24.251491    8316 oci.go:653] temporary error verifying shutdown: unknown state "force-systemd-flag-20220516225238-2444": docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	I0516 22:53:24.251491    8316 oci.go:655] temporary error: container force-systemd-flag-20220516225238-2444 status is  but expect it to be exited
	I0516 22:53:24.251659    8316 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20220516225238-2444": docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	I0516 22:53:25.154667    8316 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}
	W0516 22:53:26.282106    8316 cli_runner.go:211] docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:53:26.282228    8316 cli_runner.go:217] Completed: docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}: (1.1274303s)
	I0516 22:53:26.282372    8316 oci.go:653] temporary error verifying shutdown: unknown state "force-systemd-flag-20220516225238-2444": docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	I0516 22:53:26.282372    8316 oci.go:655] temporary error: container force-systemd-flag-20220516225238-2444 status is  but expect it to be exited
	I0516 22:53:26.282454    8316 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20220516225238-2444": docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	I0516 22:53:26.940031    8316 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}
	W0516 22:53:28.089210    8316 cli_runner.go:211] docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:53:28.089320    8316 cli_runner.go:217] Completed: docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}: (1.1491207s)
	I0516 22:53:28.089420    8316 oci.go:653] temporary error verifying shutdown: unknown state "force-systemd-flag-20220516225238-2444": docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	I0516 22:53:28.089420    8316 oci.go:655] temporary error: container force-systemd-flag-20220516225238-2444 status is  but expect it to be exited
	I0516 22:53:28.089469    8316 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20220516225238-2444": docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	I0516 22:53:29.224450    8316 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}
	W0516 22:53:30.337796    8316 cli_runner.go:211] docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:53:30.337796    8316 cli_runner.go:217] Completed: docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}: (1.113273s)
	I0516 22:53:30.337796    8316 oci.go:653] temporary error verifying shutdown: unknown state "force-systemd-flag-20220516225238-2444": docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	I0516 22:53:30.337796    8316 oci.go:655] temporary error: container force-systemd-flag-20220516225238-2444 status is  but expect it to be exited
	I0516 22:53:30.337796    8316 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20220516225238-2444": docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	I0516 22:53:31.868395    8316 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}
	W0516 22:53:32.940896    8316 cli_runner.go:211] docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:53:32.940896    8316 cli_runner.go:217] Completed: docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}: (1.072493s)
	I0516 22:53:32.940896    8316 oci.go:653] temporary error verifying shutdown: unknown state "force-systemd-flag-20220516225238-2444": docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	I0516 22:53:32.940896    8316 oci.go:655] temporary error: container force-systemd-flag-20220516225238-2444 status is  but expect it to be exited
	I0516 22:53:32.940896    8316 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20220516225238-2444": docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	I0516 22:53:36.003884    8316 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}
	W0516 22:53:37.126575    8316 cli_runner.go:211] docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:53:37.126575    8316 cli_runner.go:217] Completed: docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}: (1.1226824s)
	I0516 22:53:37.126575    8316 oci.go:653] temporary error verifying shutdown: unknown state "force-systemd-flag-20220516225238-2444": docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	I0516 22:53:37.126575    8316 oci.go:655] temporary error: container force-systemd-flag-20220516225238-2444 status is  but expect it to be exited
	I0516 22:53:37.126575    8316 oci.go:88] couldn't shut down force-systemd-flag-20220516225238-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20220516225238-2444": docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	 
	I0516 22:53:37.126575    8316 cli_runner.go:164] Run: docker rm -f -v force-systemd-flag-20220516225238-2444
	I0516 22:53:38.217534    8316 cli_runner.go:217] Completed: docker rm -f -v force-systemd-flag-20220516225238-2444: (1.0909499s)
	I0516 22:53:38.224535    8316 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-20220516225238-2444
	W0516 22:53:39.323304    8316 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-20220516225238-2444 returned with exit code 1
	I0516 22:53:39.323304    8316 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} force-systemd-flag-20220516225238-2444: (1.09876s)
	I0516 22:53:39.334342    8316 cli_runner.go:164] Run: docker network inspect force-systemd-flag-20220516225238-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:53:40.406342    8316 cli_runner.go:211] docker network inspect force-systemd-flag-20220516225238-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:53:40.406526    8316 cli_runner.go:217] Completed: docker network inspect force-systemd-flag-20220516225238-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0719432s)
	I0516 22:53:40.414714    8316 network_create.go:272] running [docker network inspect force-systemd-flag-20220516225238-2444] to gather additional debugging logs...
	I0516 22:53:40.414714    8316 cli_runner.go:164] Run: docker network inspect force-systemd-flag-20220516225238-2444
	W0516 22:53:41.467976    8316 cli_runner.go:211] docker network inspect force-systemd-flag-20220516225238-2444 returned with exit code 1
	I0516 22:53:41.468200    8316 cli_runner.go:217] Completed: docker network inspect force-systemd-flag-20220516225238-2444: (1.0532055s)
	I0516 22:53:41.468456    8316 network_create.go:275] error running [docker network inspect force-systemd-flag-20220516225238-2444]: docker network inspect force-systemd-flag-20220516225238-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-flag-20220516225238-2444
	I0516 22:53:41.468515    8316 network_create.go:277] output of [docker network inspect force-systemd-flag-20220516225238-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-flag-20220516225238-2444
	
	** /stderr **
	W0516 22:53:41.470142    8316 delete.go:139] delete failed (probably ok) <nil>
	I0516 22:53:41.470142    8316 fix.go:115] Sleeping 1 second for extra luck!
	I0516 22:53:42.473724    8316 start.go:131] createHost starting for "" (driver="docker")
	I0516 22:53:42.477170    8316 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0516 22:53:42.477855    8316 start.go:165] libmachine.API.Create for "force-systemd-flag-20220516225238-2444" (driver="docker")
	I0516 22:53:42.477855    8316 client.go:168] LocalClient.Create starting
	I0516 22:53:42.477855    8316 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 22:53:42.477855    8316 main.go:134] libmachine: Decoding PEM data...
	I0516 22:53:42.477855    8316 main.go:134] libmachine: Parsing certificate...
	I0516 22:53:42.477855    8316 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 22:53:42.477855    8316 main.go:134] libmachine: Decoding PEM data...
	I0516 22:53:42.477855    8316 main.go:134] libmachine: Parsing certificate...
	I0516 22:53:42.493082    8316 cli_runner.go:164] Run: docker network inspect force-systemd-flag-20220516225238-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:53:43.562094    8316 cli_runner.go:211] docker network inspect force-systemd-flag-20220516225238-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:53:43.562252    8316 cli_runner.go:217] Completed: docker network inspect force-systemd-flag-20220516225238-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.068716s)
	I0516 22:53:43.576812    8316 network_create.go:272] running [docker network inspect force-systemd-flag-20220516225238-2444] to gather additional debugging logs...
	I0516 22:53:43.576812    8316 cli_runner.go:164] Run: docker network inspect force-systemd-flag-20220516225238-2444
	W0516 22:53:44.672398    8316 cli_runner.go:211] docker network inspect force-systemd-flag-20220516225238-2444 returned with exit code 1
	I0516 22:53:44.672467    8316 cli_runner.go:217] Completed: docker network inspect force-systemd-flag-20220516225238-2444: (1.0954197s)
	I0516 22:53:44.672467    8316 network_create.go:275] error running [docker network inspect force-systemd-flag-20220516225238-2444]: docker network inspect force-systemd-flag-20220516225238-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-flag-20220516225238-2444
	I0516 22:53:44.672467    8316 network_create.go:277] output of [docker network inspect force-systemd-flag-20220516225238-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-flag-20220516225238-2444
	
	** /stderr **
	I0516 22:53:44.680872    8316 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 22:53:45.790165    8316 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1090902s)
	I0516 22:53:45.806329    8316 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00083a478] amended:true}} dirty:map[192.168.49.0:0xc00083a478 192.168.58.0:0xc00045cf00 192.168.67.0:0xc00083a510 192.168.76.0:0xc00045d3f8] misses:2}
	I0516 22:53:45.806329    8316 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:53:45.823349    8316 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00083a478] amended:true}} dirty:map[192.168.49.0:0xc00083a478 192.168.58.0:0xc00045cf00 192.168.67.0:0xc00083a510 192.168.76.0:0xc00045d3f8] misses:3}
	I0516 22:53:45.823349    8316 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:53:45.840984    8316 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00083a478 192.168.58.0:0xc00045cf00 192.168.67.0:0xc00083a510 192.168.76.0:0xc00045d3f8] amended:false}} dirty:map[] misses:0}
	I0516 22:53:45.840984    8316 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:53:45.856055    8316 network.go:279] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00083a478 192.168.58.0:0xc00045cf00 192.168.67.0:0xc00083a510 192.168.76.0:0xc00045d3f8] amended:false}} dirty:map[] misses:0}
	I0516 22:53:45.856055    8316 network.go:238] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:53:45.874320    8316 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00083a478 192.168.58.0:0xc00045cf00 192.168.67.0:0xc00083a510 192.168.76.0:0xc00045d3f8] amended:true}} dirty:map[192.168.49.0:0xc00083a478 192.168.58.0:0xc00045cf00 192.168.67.0:0xc00083a510 192.168.76.0:0xc00045d3f8 192.168.85.0:0xc0005c4420] misses:0}
	I0516 22:53:45.875076    8316 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:53:45.875076    8316 network_create.go:115] attempt to create docker network force-systemd-flag-20220516225238-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0516 22:53:45.885739    8316 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220516225238-2444
	W0516 22:53:47.032927    8316 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220516225238-2444 returned with exit code 1
	I0516 22:53:47.032927    8316 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220516225238-2444: (1.1471787s)
	E0516 22:53:47.032927    8316 network_create.go:104] error while trying to create docker network force-systemd-flag-20220516225238-2444 192.168.85.0/24: create docker network force-systemd-flag-20220516225238-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220516225238-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 73b9d268a96b6331e3d2ca499a74a97bc78852d7b0007662e53e58b124ba315f (br-73b9d268a96b): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	W0516 22:53:47.032927    8316 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network force-systemd-flag-20220516225238-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220516225238-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 73b9d268a96b6331e3d2ca499a74a97bc78852d7b0007662e53e58b124ba315f (br-73b9d268a96b): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network force-systemd-flag-20220516225238-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220516225238-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 73b9d268a96b6331e3d2ca499a74a97bc78852d7b0007662e53e58b124ba315f (br-73b9d268a96b): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	I0516 22:53:47.050971    8316 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 22:53:48.148851    8316 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0978085s)
	I0516 22:53:48.158605    8316 cli_runner.go:164] Run: docker volume create force-systemd-flag-20220516225238-2444 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220516225238-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 22:53:49.253573    8316 cli_runner.go:211] docker volume create force-systemd-flag-20220516225238-2444 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220516225238-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 22:53:49.253749    8316 cli_runner.go:217] Completed: docker volume create force-systemd-flag-20220516225238-2444 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220516225238-2444 --label created_by.minikube.sigs.k8s.io=true: (1.094959s)
	I0516 22:53:49.253834    8316 client.go:171] LocalClient.Create took 6.7759225s
	I0516 22:53:51.270552    8316 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:53:51.279743    8316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444
	W0516 22:53:52.392800    8316 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444 returned with exit code 1
	I0516 22:53:52.392800    8316 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444: (1.1129867s)
	I0516 22:53:52.392800    8316 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220516225238-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	I0516 22:53:52.731774    8316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444
	W0516 22:53:53.866043    8316 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444 returned with exit code 1
	I0516 22:53:53.866185    8316 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444: (1.1342598s)
	W0516 22:53:53.866185    8316 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220516225238-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	
	W0516 22:53:53.866185    8316 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220516225238-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	I0516 22:53:53.877854    8316 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:53:53.887219    8316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444
	W0516 22:53:54.988483    8316 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444 returned with exit code 1
	I0516 22:53:54.988483    8316 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444: (1.1012547s)
	I0516 22:53:54.988483    8316 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220516225238-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	I0516 22:53:55.224878    8316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444
	W0516 22:53:56.301276    8316 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444 returned with exit code 1
	I0516 22:53:56.301276    8316 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444: (1.076304s)
	W0516 22:53:56.301276    8316 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220516225238-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	
	W0516 22:53:56.301276    8316 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220516225238-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	I0516 22:53:56.301276    8316 start.go:134] duration metric: createHost completed in 13.8272169s
	I0516 22:53:56.317485    8316 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:53:56.324798    8316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444
	W0516 22:53:57.369610    8316 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444 returned with exit code 1
	I0516 22:53:57.369610    8316 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444: (1.0446771s)
	I0516 22:53:57.369610    8316 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220516225238-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	I0516 22:53:57.629926    8316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444
	W0516 22:53:58.671783    8316 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444 returned with exit code 1
	I0516 22:53:58.671783    8316 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444: (1.0418488s)
	W0516 22:53:58.671783    8316 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220516225238-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	
	W0516 22:53:58.671783    8316 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220516225238-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	I0516 22:53:58.681769    8316 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:53:58.689533    8316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444
	W0516 22:53:59.713785    8316 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444 returned with exit code 1
	I0516 22:53:59.713785    8316 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444: (1.0242431s)
	I0516 22:53:59.713785    8316 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220516225238-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	I0516 22:53:59.925754    8316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444
	W0516 22:54:01.035373    8316 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444 returned with exit code 1
	I0516 22:54:01.035373    8316 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444: (1.1094873s)
	W0516 22:54:01.035373    8316 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220516225238-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	
	W0516 22:54:01.035373    8316 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220516225238-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220516225238-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	I0516 22:54:01.035373    8316 fix.go:57] fixHost completed within 47.4488872s
	I0516 22:54:01.035373    8316 start.go:81] releasing machines lock for "force-systemd-flag-20220516225238-2444", held for 47.4490173s
	W0516 22:54:01.036017    8316 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-flag-20220516225238-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for force-systemd-flag-20220516225238-2444 container: docker volume create force-systemd-flag-20220516225238-2444 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220516225238-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-flag-20220516225238-2444: error while creating volume root path '/var/lib/docker/volumes/force-systemd-flag-20220516225238-2444': mkdir /var/lib/docker/volumes/force-systemd-flag-20220516225238-2444: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p force-systemd-flag-20220516225238-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for force-systemd-flag-20220516225238-2444 container: docker volume create force-systemd-flag-20220516225238-2444 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220516225238-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-flag-20220516225238-2444: error while creating volume root path '/var/lib/docker/volumes/force-systemd-flag-20220516225238-2444': mkdir /var/lib/docker/volumes/force-systemd-flag-20220516225238-2444: read-only file system
	
	I0516 22:54:01.041149    8316 out.go:177] 
	W0516 22:54:01.044524    8316 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for force-systemd-flag-20220516225238-2444 container: docker volume create force-systemd-flag-20220516225238-2444 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220516225238-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-flag-20220516225238-2444: error while creating volume root path '/var/lib/docker/volumes/force-systemd-flag-20220516225238-2444': mkdir /var/lib/docker/volumes/force-systemd-flag-20220516225238-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for force-systemd-flag-20220516225238-2444 container: docker volume create force-systemd-flag-20220516225238-2444 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220516225238-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-flag-20220516225238-2444: error while creating volume root path '/var/lib/docker/volumes/force-systemd-flag-20220516225238-2444': mkdir /var/lib/docker/volumes/force-systemd-flag-20220516225238-2444: read-only file system
	
	W0516 22:54:01.044524    8316 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0516 22:54:01.045042    8316 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0516 22:54:01.047931    8316 out.go:177] 

** /stderr **
docker_test.go:87: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p force-systemd-flag-20220516225238-2444 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker" : exit status 60
docker_test.go:104: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-20220516225238-2444 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:104: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p force-systemd-flag-20220516225238-2444 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (3.2800167s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "force-systemd-flag-20220516225238-2444": docker container inspect force-systemd-flag-20220516225238-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_2837ebd22544166cf14c5e2e977cc80019e59e54_2.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
docker_test.go:106: failed to get docker cgroup driver. args "out/minikube-windows-amd64.exe -p force-systemd-flag-20220516225238-2444 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
docker_test.go:100: *** TestForceSystemdFlag FAILED at 2022-05-16 22:54:04.4375588 +0000 GMT m=+3512.104936801
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-flag-20220516225238-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect force-systemd-flag-20220516225238-2444: exit status 1 (1.1599212s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: force-systemd-flag-20220516225238-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-flag-20220516225238-2444 -n force-systemd-flag-20220516225238-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-flag-20220516225238-2444 -n force-systemd-flag-20220516225238-2444: exit status 7 (2.908985s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:54:08.488317    6292 status.go:247] status error: host: state: unknown state "force-systemd-flag-20220516225238-2444": docker container inspect force-systemd-flag-20220516225238-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220516225238-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-20220516225238-2444" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-flag-20220516225238-2444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-20220516225238-2444
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-20220516225238-2444: (8.488712s)
--- FAIL: TestForceSystemdFlag (98.42s)

TestForceSystemdEnv (97.64s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-20220516225309-2444 --memory=2048 --alsologtostderr -v=5 --driver=docker

=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p force-systemd-env-20220516225309-2444 --memory=2048 --alsologtostderr -v=5 --driver=docker: exit status 60 (1m21.5202327s)

-- stdout --
	* [force-systemd-env-20220516225309-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node force-systemd-env-20220516225309-2444 in cluster force-systemd-env-20220516225309-2444
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-env-20220516225309-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0516 22:53:09.846889    3636 out.go:296] Setting OutFile to fd 1504 ...
	I0516 22:53:09.909355    3636 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:53:09.909355    3636 out.go:309] Setting ErrFile to fd 1448...
	I0516 22:53:09.909355    3636 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:53:09.924090    3636 out.go:303] Setting JSON to false
	I0516 22:53:09.926699    3636 start.go:115] hostinfo: {"hostname":"minikube2","uptime":4702,"bootTime":1652736887,"procs":155,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0516 22:53:09.926699    3636 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0516 22:53:09.933878    3636 out.go:177] * [force-systemd-env-20220516225309-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0516 22:53:09.939615    3636 notify.go:193] Checking for updates...
	I0516 22:53:09.941598    3636 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0516 22:53:09.944561    3636 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0516 22:53:09.947384    3636 out.go:177]   - MINIKUBE_LOCATION=12739
	I0516 22:53:09.949680    3636 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0516 22:53:09.952882    3636 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0516 22:53:09.956610    3636 config.go:178] Loaded profile config "force-systemd-flag-20220516225238-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:53:09.957016    3636 config.go:178] Loaded profile config "multinode-20220516223121-2444-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:53:09.957408    3636 config.go:178] Loaded profile config "pause-20220516225202-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:53:09.957408    3636 config.go:178] Loaded profile config "running-upgrade-20220516224826-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0516 22:53:09.957408    3636 driver.go:358] Setting default libvirt URI to qemu:///system
	I0516 22:53:12.701647    3636 docker.go:137] docker version: linux-20.10.14
	I0516 22:53:12.714186    3636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:53:14.827443    3636 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1131926s)
	I0516 22:53:14.827770    3636 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 22:53:13.7541241 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:53:14.832399    3636 out.go:177] * Using the docker driver based on user configuration
	I0516 22:53:14.834527    3636 start.go:284] selected driver: docker
	I0516 22:53:14.834527    3636 start.go:806] validating driver "docker" against <nil>
	I0516 22:53:14.834527    3636 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0516 22:53:14.904814    3636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:53:17.129559    3636 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.2246404s)
	I0516 22:53:17.130121    3636 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 22:53:15.9953492 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:53:17.130519    3636 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0516 22:53:17.131462    3636 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0516 22:53:17.136075    3636 out.go:177] * Using Docker Desktop driver with the root privilege
	I0516 22:53:17.138981    3636 cni.go:95] Creating CNI manager for ""
	I0516 22:53:17.138981    3636 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0516 22:53:17.138981    3636 start_flags.go:306] config:
	{Name:force-systemd-env-20220516225309-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:force-systemd-env-20220516225309-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 22:53:17.142731    3636 out.go:177] * Starting control plane node force-systemd-env-20220516225309-2444 in cluster force-systemd-env-20220516225309-2444
	I0516 22:53:17.144337    3636 cache.go:120] Beginning downloading kic base image for docker with docker
	I0516 22:53:17.147760    3636 out.go:177] * Pulling base image ...
	I0516 22:53:17.149779    3636 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0516 22:53:17.149779    3636 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0516 22:53:17.149779    3636 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0516 22:53:17.149779    3636 cache.go:57] Caching tarball of preloaded images
	I0516 22:53:17.150763    3636 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0516 22:53:17.150763    3636 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0516 22:53:17.150763    3636 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-20220516225309-2444\config.json ...
	I0516 22:53:17.150763    3636 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-20220516225309-2444\config.json: {Name:mkfc98f02ef07df6d223b92cab6a3183587448b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0516 22:53:18.276076    3636 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0516 22:53:18.276076    3636 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:53:18.276076    3636 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:53:18.276076    3636 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
	I0516 22:53:18.276076    3636 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory, skipping pull
	I0516 22:53:18.276076    3636 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in cache, skipping pull
	I0516 22:53:18.276076    3636 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
	I0516 22:53:18.276076    3636 cache.go:160] Loading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from local cache
	I0516 22:53:18.276076    3636 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:53:20.701734    3636 image.go:219] response: {"errorDetail":{"message":"mkdir /var/lib/docker/tmp/docker-import-080226098: read-only file system"},"error":"mkdir /var/lib/docker/tmp/docker-import-080226098: read-only file system"}
	I0516 22:53:20.701734    3636 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from cached tarball
	I0516 22:53:20.701734    3636 cache.go:206] Successfully downloaded all kic artifacts
	I0516 22:53:20.701734    3636 start.go:352] acquiring machines lock for force-systemd-env-20220516225309-2444: {Name:mk218ff504e90badcc3a03f83806d76b6ab790a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:53:20.701734    3636 start.go:356] acquired machines lock for "force-systemd-env-20220516225309-2444" in 0s
	I0516 22:53:20.701734    3636 start.go:91] Provisioning new machine with config: &{Name:force-systemd-env-20220516225309-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:force-systemd-env-20220516225309-2444 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:844
3 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0516 22:53:20.701734    3636 start.go:131] createHost starting for "" (driver="docker")
	I0516 22:53:20.705638    3636 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0516 22:53:20.705638    3636 start.go:165] libmachine.API.Create for "force-systemd-env-20220516225309-2444" (driver="docker")
	I0516 22:53:20.705638    3636 client.go:168] LocalClient.Create starting
	I0516 22:53:20.706641    3636 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 22:53:20.706641    3636 main.go:134] libmachine: Decoding PEM data...
	I0516 22:53:20.706641    3636 main.go:134] libmachine: Parsing certificate...
	I0516 22:53:20.706641    3636 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 22:53:20.707366    3636 main.go:134] libmachine: Decoding PEM data...
	I0516 22:53:20.707506    3636 main.go:134] libmachine: Parsing certificate...
	I0516 22:53:20.716488    3636 cli_runner.go:164] Run: docker network inspect force-systemd-env-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:53:21.826571    3636 cli_runner.go:211] docker network inspect force-systemd-env-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:53:21.826673    3636 cli_runner.go:217] Completed: docker network inspect force-systemd-env-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1098608s)
	I0516 22:53:21.836204    3636 network_create.go:272] running [docker network inspect force-systemd-env-20220516225309-2444] to gather additional debugging logs...
	I0516 22:53:21.836204    3636 cli_runner.go:164] Run: docker network inspect force-systemd-env-20220516225309-2444
	W0516 22:53:22.987631    3636 cli_runner.go:211] docker network inspect force-systemd-env-20220516225309-2444 returned with exit code 1
	I0516 22:53:22.987694    3636 cli_runner.go:217] Completed: docker network inspect force-systemd-env-20220516225309-2444: (1.1513588s)
	I0516 22:53:22.987779    3636 network_create.go:275] error running [docker network inspect force-systemd-env-20220516225309-2444]: docker network inspect force-systemd-env-20220516225309-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-env-20220516225309-2444
	I0516 22:53:22.987826    3636 network_create.go:277] output of [docker network inspect force-systemd-env-20220516225309-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-env-20220516225309-2444
	
	** /stderr **
	I0516 22:53:22.997652    3636 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 22:53:24.296779    3636 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.2991163s)
	I0516 22:53:24.321695    3636 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0009f4250] misses:0}
	I0516 22:53:24.321784    3636 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:53:24.321784    3636 network_create.go:115] attempt to create docker network force-systemd-env-20220516225309-2444 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0516 22:53:24.328411    3636 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220516225309-2444
	W0516 22:53:25.430021    3636 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220516225309-2444 returned with exit code 1
	I0516 22:53:25.430061    3636 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220516225309-2444: (1.1014301s)
	W0516 22:53:25.430127    3636 network_create.go:107] failed to create docker network force-systemd-env-20220516225309-2444 192.168.49.0/24, will retry: subnet is taken
	I0516 22:53:25.451612    3636 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0009f4250] amended:false}} dirty:map[] misses:0}
	I0516 22:53:25.451612    3636 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:53:25.471670    3636 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0009f4250] amended:true}} dirty:map[192.168.49.0:0xc0009f4250 192.168.58.0:0xc0001ac3a0] misses:0}
	I0516 22:53:25.471670    3636 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:53:25.471670    3636 network_create.go:115] attempt to create docker network force-systemd-env-20220516225309-2444 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0516 22:53:25.478985    3636 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220516225309-2444
	W0516 22:53:26.581705    3636 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220516225309-2444 returned with exit code 1
	I0516 22:53:26.581909    3636 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220516225309-2444: (1.1017126s)
	W0516 22:53:26.581963    3636 network_create.go:107] failed to create docker network force-systemd-env-20220516225309-2444 192.168.58.0/24, will retry: subnet is taken
	I0516 22:53:26.601024    3636 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0009f4250] amended:true}} dirty:map[192.168.49.0:0xc0009f4250 192.168.58.0:0xc0001ac3a0] misses:1}
	I0516 22:53:26.602030    3636 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:53:26.623305    3636 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0009f4250] amended:true}} dirty:map[192.168.49.0:0xc0009f4250 192.168.58.0:0xc0001ac3a0 192.168.67.0:0xc0009f43c8] misses:1}
	I0516 22:53:26.623305    3636 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:53:26.623305    3636 network_create.go:115] attempt to create docker network force-systemd-env-20220516225309-2444 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0516 22:53:26.635139    3636 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220516225309-2444
	W0516 22:53:27.730593    3636 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220516225309-2444 returned with exit code 1
	I0516 22:53:27.730593    3636 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220516225309-2444: (1.0954458s)
	W0516 22:53:27.730593    3636 network_create.go:107] failed to create docker network force-systemd-env-20220516225309-2444 192.168.67.0/24, will retry: subnet is taken
	I0516 22:53:27.751092    3636 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0009f4250] amended:true}} dirty:map[192.168.49.0:0xc0009f4250 192.168.58.0:0xc0001ac3a0 192.168.67.0:0xc0009f43c8] misses:2}
	I0516 22:53:27.751092    3636 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:53:27.768911    3636 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0009f4250] amended:true}} dirty:map[192.168.49.0:0xc0009f4250 192.168.58.0:0xc0001ac3a0 192.168.67.0:0xc0009f43c8 192.168.76.0:0xc000006168] misses:2}
	I0516 22:53:27.769955    3636 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:53:27.769994    3636 network_create.go:115] attempt to create docker network force-systemd-env-20220516225309-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0516 22:53:27.778924    3636 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220516225309-2444
	W0516 22:53:28.889284    3636 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220516225309-2444 returned with exit code 1
	I0516 22:53:28.889447    3636 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220516225309-2444: (1.1091502s)
	E0516 22:53:28.889558    3636 network_create.go:104] error while trying to create docker network force-systemd-env-20220516225309-2444 192.168.76.0/24: create docker network force-systemd-env-20220516225309-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220516225309-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 51ed04daf1c5e5881975d0e8da0d368e4e73d8d91d7901079a420652e2342a36 (br-51ed04daf1c5): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	W0516 22:53:28.889880    3636 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network force-systemd-env-20220516225309-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220516225309-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 51ed04daf1c5e5881975d0e8da0d368e4e73d8d91d7901079a420652e2342a36 (br-51ed04daf1c5): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network force-systemd-env-20220516225309-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220516225309-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 51ed04daf1c5e5881975d0e8da0d368e4e73d8d91d7901079a420652e2342a36 (br-51ed04daf1c5): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	I0516 22:53:28.907691    3636 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 22:53:30.026641    3636 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1188504s)
	I0516 22:53:30.035555    3636 cli_runner.go:164] Run: docker volume create force-systemd-env-20220516225309-2444 --label name.minikube.sigs.k8s.io=force-systemd-env-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 22:53:31.227387    3636 cli_runner.go:211] docker volume create force-systemd-env-20220516225309-2444 --label name.minikube.sigs.k8s.io=force-systemd-env-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 22:53:31.227462    3636 cli_runner.go:217] Completed: docker volume create force-systemd-env-20220516225309-2444 --label name.minikube.sigs.k8s.io=force-systemd-env-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: (1.1917086s)
	I0516 22:53:31.227536    3636 client.go:171] LocalClient.Create took 10.5217729s
	I0516 22:53:33.250416    3636 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:53:33.257285    3636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444
	W0516 22:53:34.331961    3636 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444 returned with exit code 1
	I0516 22:53:34.332005    3636 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444: (1.074421s)
	I0516 22:53:34.332182    3636 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	I0516 22:53:34.626939    3636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444
	W0516 22:53:35.663320    3636 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444 returned with exit code 1
	I0516 22:53:35.663320    3636 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444: (1.0363722s)
	W0516 22:53:35.663320    3636 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	
	W0516 22:53:35.663320    3636 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	I0516 22:53:35.677255    3636 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:53:35.684438    3636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444
	W0516 22:53:36.753484    3636 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444 returned with exit code 1
	I0516 22:53:36.753484    3636 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444: (1.0690371s)
	I0516 22:53:36.753484    3636 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	I0516 22:53:37.059306    3636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444
	W0516 22:53:38.172001    3636 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444 returned with exit code 1
	I0516 22:53:38.172071    3636 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444: (1.1125598s)
	W0516 22:53:38.172369    3636 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	
	W0516 22:53:38.172445    3636 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	I0516 22:53:38.172477    3636 start.go:134] duration metric: createHost completed in 17.4705982s
	I0516 22:53:38.172508    3636 start.go:81] releasing machines lock for "force-systemd-env-20220516225309-2444", held for 17.4706292s
	W0516 22:53:38.172664    3636 start.go:608] error starting host: creating host: create: creating: setting up container node: creating volume for force-systemd-env-20220516225309-2444 container: docker volume create force-systemd-env-20220516225309-2444 --label name.minikube.sigs.k8s.io=force-systemd-env-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-env-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/force-systemd-env-20220516225309-2444': mkdir /var/lib/docker/volumes/force-systemd-env-20220516225309-2444: read-only file system
	I0516 22:53:38.192320    3636 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}
	W0516 22:53:39.291234    3636 cli_runner.go:211] docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:53:39.291234    3636 cli_runner.go:217] Completed: docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}: (1.0989046s)
	I0516 22:53:39.291234    3636 delete.go:82] Unable to get host status for force-systemd-env-20220516225309-2444, assuming it has already been deleted: state: unknown state "force-systemd-env-20220516225309-2444": docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	W0516 22:53:39.291234    3636 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for force-systemd-env-20220516225309-2444 container: docker volume create force-systemd-env-20220516225309-2444 --label name.minikube.sigs.k8s.io=force-systemd-env-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-env-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/force-systemd-env-20220516225309-2444': mkdir /var/lib/docker/volumes/force-systemd-env-20220516225309-2444: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for force-systemd-env-20220516225309-2444 container: docker volume create force-systemd-env-20220516225309-2444 --label name.minikube.sigs.k8s.io=force-systemd-env-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-env-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/force-systemd-env-20220516225309-2444': mkdir /var/lib/docker/volumes/force-systemd-env-20220516225309-2444: read-only file system
	
	I0516 22:53:39.291234    3636 start.go:623] Will try again in 5 seconds ...
	I0516 22:53:44.296466    3636 start.go:352] acquiring machines lock for force-systemd-env-20220516225309-2444: {Name:mk218ff504e90badcc3a03f83806d76b6ab790a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:53:44.296466    3636 start.go:356] acquired machines lock for "force-systemd-env-20220516225309-2444" in 0s
	I0516 22:53:44.296466    3636 start.go:94] Skipping create...Using existing machine configuration
	I0516 22:53:44.296466    3636 fix.go:55] fixHost starting: 
	I0516 22:53:44.315087    3636 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}
	W0516 22:53:45.397520    3636 cli_runner.go:211] docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:53:45.397587    3636 cli_runner.go:217] Completed: docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}: (1.0823599s)
	I0516 22:53:45.397645    3636 fix.go:103] recreateIfNeeded on force-systemd-env-20220516225309-2444: state= err=unknown state "force-systemd-env-20220516225309-2444": docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	I0516 22:53:45.397722    3636 fix.go:108] machineExists: false. err=machine does not exist
	I0516 22:53:45.403813    3636 out.go:177] * docker "force-systemd-env-20220516225309-2444" container is missing, will recreate.
	I0516 22:53:45.405781    3636 delete.go:124] DEMOLISHING force-systemd-env-20220516225309-2444 ...
	I0516 22:53:45.422788    3636 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}
	W0516 22:53:46.512917    3636 cli_runner.go:211] docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:53:46.512917    3636 cli_runner.go:217] Completed: docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}: (1.0901206s)
	W0516 22:53:46.512917    3636 stop.go:75] unable to get state: unknown state "force-systemd-env-20220516225309-2444": docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	I0516 22:53:46.512917    3636 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "force-systemd-env-20220516225309-2444": docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	I0516 22:53:46.528966    3636 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}
	W0516 22:53:47.629147    3636 cli_runner.go:211] docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:53:47.629147    3636 cli_runner.go:217] Completed: docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}: (1.1001719s)
	I0516 22:53:47.629147    3636 delete.go:82] Unable to get host status for force-systemd-env-20220516225309-2444, assuming it has already been deleted: state: unknown state "force-systemd-env-20220516225309-2444": docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	I0516 22:53:47.637141    3636 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-20220516225309-2444
	W0516 22:53:48.744503    3636 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-20220516225309-2444 returned with exit code 1
	I0516 22:53:48.744503    3636 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} force-systemd-env-20220516225309-2444: (1.1073535s)
	I0516 22:53:48.744503    3636 kic.go:356] could not find the container force-systemd-env-20220516225309-2444 to remove it. will try anyways
	I0516 22:53:48.753913    3636 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}
	W0516 22:53:49.864962    3636 cli_runner.go:211] docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:53:49.865033    3636 cli_runner.go:217] Completed: docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}: (1.1109678s)
	W0516 22:53:49.865102    3636 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-env-20220516225309-2444": docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	I0516 22:53:49.873910    3636 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-env-20220516225309-2444 /bin/bash -c "sudo init 0"
	W0516 22:53:50.944290    3636 cli_runner.go:211] docker exec --privileged -t force-systemd-env-20220516225309-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 22:53:50.944541    3636 cli_runner.go:217] Completed: docker exec --privileged -t force-systemd-env-20220516225309-2444 /bin/bash -c "sudo init 0": (1.0703712s)
	I0516 22:53:50.944541    3636 oci.go:641] error shutdown force-systemd-env-20220516225309-2444: docker exec --privileged -t force-systemd-env-20220516225309-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	I0516 22:53:51.965749    3636 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}
	W0516 22:53:53.099969    3636 cli_runner.go:211] docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:53:53.100038    3636 cli_runner.go:217] Completed: docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}: (1.1340772s)
	I0516 22:53:53.100038    3636 oci.go:653] temporary error verifying shutdown: unknown state "force-systemd-env-20220516225309-2444": docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	I0516 22:53:53.100038    3636 oci.go:655] temporary error: container force-systemd-env-20220516225309-2444 status is  but expect it to be exited
	I0516 22:53:53.100038    3636 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-20220516225309-2444": docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	I0516 22:53:53.581617    3636 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}
	W0516 22:53:54.750031    3636 cli_runner.go:211] docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:53:54.750031    3636 cli_runner.go:217] Completed: docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}: (1.1682609s)
	I0516 22:53:54.750031    3636 oci.go:653] temporary error verifying shutdown: unknown state "force-systemd-env-20220516225309-2444": docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	I0516 22:53:54.750031    3636 oci.go:655] temporary error: container force-systemd-env-20220516225309-2444 status is  but expect it to be exited
	I0516 22:53:54.750031    3636 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-20220516225309-2444": docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	I0516 22:53:55.670667    3636 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}
	W0516 22:53:56.740849    3636 cli_runner.go:211] docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:53:56.740849    3636 cli_runner.go:217] Completed: docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}: (1.0700597s)
	I0516 22:53:56.740849    3636 oci.go:653] temporary error verifying shutdown: unknown state "force-systemd-env-20220516225309-2444": docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	I0516 22:53:56.740849    3636 oci.go:655] temporary error: container force-systemd-env-20220516225309-2444 status is  but expect it to be exited
	I0516 22:53:56.740849    3636 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-20220516225309-2444": docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	I0516 22:53:57.391941    3636 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}
	W0516 22:53:58.449978    3636 cli_runner.go:211] docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:53:58.450067    3636 cli_runner.go:217] Completed: docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}: (1.0578003s)
	I0516 22:53:58.450067    3636 oci.go:653] temporary error verifying shutdown: unknown state "force-systemd-env-20220516225309-2444": docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	I0516 22:53:58.450067    3636 oci.go:655] temporary error: container force-systemd-env-20220516225309-2444 status is  but expect it to be exited
	I0516 22:53:58.450067    3636 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "force-systemd-env-20220516225309-2444": docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	I0516 22:53:59.582017    3636 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}
	W0516 22:54:00.628466    3636 cli_runner.go:211] docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:54:00.628510    3636 cli_runner.go:217] Completed: docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}: (1.0464004s)
	I0516 22:54:00.628558    3636 oci.go:653] temporary error verifying shutdown: unknown state "force-systemd-env-20220516225309-2444": docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	I0516 22:54:00.628558    3636 oci.go:655] temporary error: container force-systemd-env-20220516225309-2444 status is  but expect it to be exited
	I0516 22:54:00.628558    3636 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "force-systemd-env-20220516225309-2444": docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	I0516 22:54:02.158689    3636 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}
	W0516 22:54:03.240376    3636 cli_runner.go:211] docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:54:03.240439    3636 cli_runner.go:217] Completed: docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}: (1.0815186s)
	I0516 22:54:03.240439    3636 oci.go:653] temporary error verifying shutdown: unknown state "force-systemd-env-20220516225309-2444": docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	I0516 22:54:03.240439    3636 oci.go:655] temporary error: container force-systemd-env-20220516225309-2444 status is  but expect it to be exited
	I0516 22:54:03.240439    3636 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "force-systemd-env-20220516225309-2444": docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	I0516 22:54:06.304149    3636 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}
	W0516 22:54:07.325839    3636 cli_runner.go:211] docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:54:07.325839    3636 cli_runner.go:217] Completed: docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}: (1.0216816s)
	I0516 22:54:07.325839    3636 oci.go:653] temporary error verifying shutdown: unknown state "force-systemd-env-20220516225309-2444": docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	I0516 22:54:07.325839    3636 oci.go:655] temporary error: container force-systemd-env-20220516225309-2444 status is  but expect it to be exited
	I0516 22:54:07.325839    3636 oci.go:88] couldn't shut down force-systemd-env-20220516225309-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-env-20220516225309-2444": docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	 
	I0516 22:54:07.333860    3636 cli_runner.go:164] Run: docker rm -f -v force-systemd-env-20220516225309-2444
	I0516 22:54:08.377975    3636 cli_runner.go:217] Completed: docker rm -f -v force-systemd-env-20220516225309-2444: (1.0438224s)
	I0516 22:54:08.387055    3636 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-20220516225309-2444
	W0516 22:54:09.463938    3636 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-20220516225309-2444 returned with exit code 1
	I0516 22:54:09.463938    3636 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} force-systemd-env-20220516225309-2444: (1.0768748s)
	I0516 22:54:09.470955    3636 cli_runner.go:164] Run: docker network inspect force-systemd-env-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:54:10.496913    3636 cli_runner.go:211] docker network inspect force-systemd-env-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:54:10.496913    3636 cli_runner.go:217] Completed: docker network inspect force-systemd-env-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0258016s)
	I0516 22:54:10.505428    3636 network_create.go:272] running [docker network inspect force-systemd-env-20220516225309-2444] to gather additional debugging logs...
	I0516 22:54:10.505428    3636 cli_runner.go:164] Run: docker network inspect force-systemd-env-20220516225309-2444
	W0516 22:54:11.561583    3636 cli_runner.go:211] docker network inspect force-systemd-env-20220516225309-2444 returned with exit code 1
	I0516 22:54:11.561583    3636 cli_runner.go:217] Completed: docker network inspect force-systemd-env-20220516225309-2444: (1.0559043s)
	I0516 22:54:11.561583    3636 network_create.go:275] error running [docker network inspect force-systemd-env-20220516225309-2444]: docker network inspect force-systemd-env-20220516225309-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-env-20220516225309-2444
	I0516 22:54:11.561583    3636 network_create.go:277] output of [docker network inspect force-systemd-env-20220516225309-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-env-20220516225309-2444
	
	** /stderr **
	W0516 22:54:11.563083    3636 delete.go:139] delete failed (probably ok) <nil>
	I0516 22:54:11.563083    3636 fix.go:115] Sleeping 1 second for extra luck!
	I0516 22:54:12.570993    3636 start.go:131] createHost starting for "" (driver="docker")
	I0516 22:54:12.574068    3636 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0516 22:54:12.574693    3636 start.go:165] libmachine.API.Create for "force-systemd-env-20220516225309-2444" (driver="docker")
	I0516 22:54:12.574794    3636 client.go:168] LocalClient.Create starting
	I0516 22:54:12.574794    3636 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 22:54:12.575396    3636 main.go:134] libmachine: Decoding PEM data...
	I0516 22:54:12.575396    3636 main.go:134] libmachine: Parsing certificate...
	I0516 22:54:12.575396    3636 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 22:54:12.575913    3636 main.go:134] libmachine: Decoding PEM data...
	I0516 22:54:12.575955    3636 main.go:134] libmachine: Parsing certificate...
	I0516 22:54:12.586540    3636 cli_runner.go:164] Run: docker network inspect force-systemd-env-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:54:13.664498    3636 cli_runner.go:211] docker network inspect force-systemd-env-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:54:13.664498    3636 cli_runner.go:217] Completed: docker network inspect force-systemd-env-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0778986s)
	I0516 22:54:13.672493    3636 network_create.go:272] running [docker network inspect force-systemd-env-20220516225309-2444] to gather additional debugging logs...
	I0516 22:54:13.672493    3636 cli_runner.go:164] Run: docker network inspect force-systemd-env-20220516225309-2444
	W0516 22:54:14.759105    3636 cli_runner.go:211] docker network inspect force-systemd-env-20220516225309-2444 returned with exit code 1
	I0516 22:54:14.759105    3636 cli_runner.go:217] Completed: docker network inspect force-systemd-env-20220516225309-2444: (1.0866033s)
	I0516 22:54:14.759105    3636 network_create.go:275] error running [docker network inspect force-systemd-env-20220516225309-2444]: docker network inspect force-systemd-env-20220516225309-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-env-20220516225309-2444
	I0516 22:54:14.759105    3636 network_create.go:277] output of [docker network inspect force-systemd-env-20220516225309-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-env-20220516225309-2444
	
	** /stderr **
	I0516 22:54:14.767106    3636 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 22:54:15.846342    3636 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0792274s)
	I0516 22:54:15.862338    3636 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0009f4250] amended:true}} dirty:map[192.168.49.0:0xc0009f4250 192.168.58.0:0xc0001ac3a0 192.168.67.0:0xc0009f43c8 192.168.76.0:0xc000006168] misses:2}
	I0516 22:54:15.862338    3636 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:54:15.878338    3636 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0009f4250] amended:true}} dirty:map[192.168.49.0:0xc0009f4250 192.168.58.0:0xc0001ac3a0 192.168.67.0:0xc0009f43c8 192.168.76.0:0xc000006168] misses:3}
	I0516 22:54:15.878338    3636 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:54:15.893339    3636 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0009f4250 192.168.58.0:0xc0001ac3a0 192.168.67.0:0xc0009f43c8 192.168.76.0:0xc000006168] amended:false}} dirty:map[] misses:0}
	I0516 22:54:15.893339    3636 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:54:15.908341    3636 network.go:279] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0009f4250 192.168.58.0:0xc0001ac3a0 192.168.67.0:0xc0009f43c8 192.168.76.0:0xc000006168] amended:false}} dirty:map[] misses:0}
	I0516 22:54:15.908341    3636 network.go:238] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:54:15.923338    3636 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0009f4250 192.168.58.0:0xc0001ac3a0 192.168.67.0:0xc0009f43c8 192.168.76.0:0xc000006168] amended:true}} dirty:map[192.168.49.0:0xc0009f4250 192.168.58.0:0xc0001ac3a0 192.168.67.0:0xc0009f43c8 192.168.76.0:0xc000006168 192.168.85.0:0xc0009f4498] misses:0}
	I0516 22:54:15.923338    3636 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:54:15.923338    3636 network_create.go:115] attempt to create docker network force-systemd-env-20220516225309-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0516 22:54:15.931338    3636 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220516225309-2444
	W0516 22:54:17.024605    3636 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220516225309-2444 returned with exit code 1
	I0516 22:54:17.024605    3636 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220516225309-2444: (1.0932579s)
	E0516 22:54:17.024605    3636 network_create.go:104] error while trying to create docker network force-systemd-env-20220516225309-2444 192.168.85.0/24: create docker network force-systemd-env-20220516225309-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220516225309-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 8d57d741c26ce00bb99f989918cb1942644d9c41add05af70543ca63a469c101 (br-8d57d741c26c): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	W0516 22:54:17.024605    3636 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network force-systemd-env-20220516225309-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220516225309-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 8d57d741c26ce00bb99f989918cb1942644d9c41add05af70543ca63a469c101 (br-8d57d741c26c): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network force-systemd-env-20220516225309-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220516225309-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 8d57d741c26ce00bb99f989918cb1942644d9c41add05af70543ca63a469c101 (br-8d57d741c26c): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	I0516 22:54:17.039606    3636 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 22:54:18.131856    3636 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0922414s)
	I0516 22:54:18.138856    3636 cli_runner.go:164] Run: docker volume create force-systemd-env-20220516225309-2444 --label name.minikube.sigs.k8s.io=force-systemd-env-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 22:54:19.194888    3636 cli_runner.go:211] docker volume create force-systemd-env-20220516225309-2444 --label name.minikube.sigs.k8s.io=force-systemd-env-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 22:54:19.194888    3636 cli_runner.go:217] Completed: docker volume create force-systemd-env-20220516225309-2444 --label name.minikube.sigs.k8s.io=force-systemd-env-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: (1.056023s)
	I0516 22:54:19.194888    3636 client.go:171] LocalClient.Create took 6.6200386s
	I0516 22:54:21.210828    3636 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:54:21.219750    3636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444
	W0516 22:54:22.306605    3636 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444 returned with exit code 1
	I0516 22:54:22.306605    3636 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444: (1.08679s)
	I0516 22:54:22.306605    3636 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	I0516 22:54:22.647273    3636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444
	W0516 22:54:23.730583    3636 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444 returned with exit code 1
	I0516 22:54:23.730583    3636 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444: (1.0833003s)
	W0516 22:54:23.730583    3636 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	
	W0516 22:54:23.730583    3636 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	I0516 22:54:23.743905    3636 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:54:23.750998    3636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444
	W0516 22:54:24.885104    3636 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444 returned with exit code 1
	I0516 22:54:24.885104    3636 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444: (1.1340968s)
	I0516 22:54:24.885104    3636 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	I0516 22:54:25.130066    3636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444
	W0516 22:54:26.192663    3636 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444 returned with exit code 1
	I0516 22:54:26.192663    3636 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444: (1.0625883s)
	W0516 22:54:26.192663    3636 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	
	W0516 22:54:26.192663    3636 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	I0516 22:54:26.192663    3636 start.go:134] duration metric: createHost completed in 13.6215222s
	I0516 22:54:26.202668    3636 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:54:26.210673    3636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444
	W0516 22:54:27.270254    3636 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444 returned with exit code 1
	I0516 22:54:27.270254    3636 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444: (1.0595717s)
	I0516 22:54:27.270254    3636 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	I0516 22:54:27.534857    3636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444
	W0516 22:54:28.645165    3636 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444 returned with exit code 1
	I0516 22:54:28.645165    3636 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444: (1.1102986s)
	W0516 22:54:28.645165    3636 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	
	W0516 22:54:28.645165    3636 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	I0516 22:54:28.658154    3636 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:54:28.667154    3636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444
	W0516 22:54:29.757787    3636 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444 returned with exit code 1
	I0516 22:54:29.757842    3636 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444: (1.0904481s)
	I0516 22:54:29.757842    3636 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	I0516 22:54:29.971470    3636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444
	W0516 22:54:31.073986    3636 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444 returned with exit code 1
	I0516 22:54:31.073986    3636 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444: (1.1024436s)
	W0516 22:54:31.073986    3636 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	
	W0516 22:54:31.073986    3636 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	I0516 22:54:31.073986    3636 fix.go:57] fixHost completed within 46.7771312s
	I0516 22:54:31.073986    3636 start.go:81] releasing machines lock for "force-systemd-env-20220516225309-2444", held for 46.7771312s
	W0516 22:54:31.073986    3636 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-env-20220516225309-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for force-systemd-env-20220516225309-2444 container: docker volume create force-systemd-env-20220516225309-2444 --label name.minikube.sigs.k8s.io=force-systemd-env-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-env-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/force-systemd-env-20220516225309-2444': mkdir /var/lib/docker/volumes/force-systemd-env-20220516225309-2444: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p force-systemd-env-20220516225309-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for force-systemd-env-20220516225309-2444 container: docker volume create force-systemd-env-20220516225309-2444 --label name.minikube.sigs.k8s.io=force-systemd-env-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-env-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/force-systemd-env-20220516225309-2444': mkdir /var/lib/docker/volumes/force-systemd-env-20220516225309-2444: read-only file system
	
	I0516 22:54:31.078991    3636 out.go:177] 
	W0516 22:54:31.081091    3636 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for force-systemd-env-20220516225309-2444 container: docker volume create force-systemd-env-20220516225309-2444 --label name.minikube.sigs.k8s.io=force-systemd-env-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-env-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/force-systemd-env-20220516225309-2444': mkdir /var/lib/docker/volumes/force-systemd-env-20220516225309-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for force-systemd-env-20220516225309-2444 container: docker volume create force-systemd-env-20220516225309-2444 --label name.minikube.sigs.k8s.io=force-systemd-env-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-env-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/force-systemd-env-20220516225309-2444': mkdir /var/lib/docker/volumes/force-systemd-env-20220516225309-2444: read-only file system
	
	W0516 22:54:31.082000    3636 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0516 22:54:31.082000    3636 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0516 22:54:31.084990    3636 out.go:177] 

** /stderr **
docker_test.go:152: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p force-systemd-env-20220516225309-2444 --memory=2048 --alsologtostderr -v=5 --driver=docker" : exit status 60
docker_test.go:104: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-20220516225309-2444 ssh "docker info --format {{.CgroupDriver}}"

=== CONT  TestForceSystemdEnv
docker_test.go:104: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p force-systemd-env-20220516225309-2444 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (3.3093419s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "force-systemd-env-20220516225309-2444": docker container inspect force-systemd-env-20220516225309-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_201.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
docker_test.go:106: failed to get docker cgroup driver. args "out/minikube-windows-amd64.exe -p force-systemd-env-20220516225309-2444 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
docker_test.go:161: *** TestForceSystemdEnv FAILED at 2022-05-16 22:54:34.5018298 +0000 GMT m=+3542.168957101
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-env-20220516225309-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect force-systemd-env-20220516225309-2444: exit status 1 (1.2420418s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: force-systemd-env-20220516225309-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-env-20220516225309-2444 -n force-systemd-env-20220516225309-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-env-20220516225309-2444 -n force-systemd-env-20220516225309-2444: exit status 7 (2.9755321s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:54:38.699538    8004 status.go:247] status error: host: state: unknown state "force-systemd-env-20220516225309-2444": docker container inspect force-systemd-env-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220516225309-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-20220516225309-2444" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-env-20220516225309-2444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-20220516225309-2444

=== CONT  TestForceSystemdEnv
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-20220516225309-2444: (8.5092987s)
--- FAIL: TestForceSystemdEnv (97.64s)
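
The loop visible above — `cli_runner` invoking `docker container inspect -f` for the 22/tcp host port, failing with `No such container`, and `retry.go:31` scheduling another attempt a few hundred milliseconds later — is a plain retry-with-delay. A minimal sketch of that pattern (illustrative Python, not minikube's actual Go implementation; `inspect_ssh_port` and the port value are made up for the example):

```python
import time

def retry(op, attempts=5, delay=0.2):
    """Retry op until it succeeds, sleeping `delay` seconds between tries
    (minikube's retry.go logs lines like 'will retry after 242.222461ms')."""
    last_err = None
    for _ in range(attempts):
        try:
            return op()
        except Exception as e:  # the real code retries only retryable errors
            last_err = e
            time.sleep(delay)
    raise last_err

# Hypothetical flaky operation: fails twice (container not yet inspectable),
# then returns the mapped host port for 22/tcp.
calls = {"n": 0}
def inspect_ssh_port():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("Error: No such container")
    return 55044  # made-up host port

print(retry(inspect_ssh_port, delay=0.01))  # → 55044
```

In this failing run the container never comes into existence at all (the volume create already failed), so every attempt ends in `Error: No such container` and the surrounding start logic eventually gives up.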

TestErrorSpam/setup (77.92s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-20220516215858-2444 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220516215858-2444 --driver=docker
error_spam_test.go:78: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p nospam-20220516215858-2444 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220516215858-2444 --driver=docker: exit status 60 (1m17.9055074s)

-- stdout --
	* [nospam-20220516215858-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node nospam-20220516215858-2444 in cluster nospam-20220516215858-2444
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2250MB) ...
	* docker "nospam-20220516215858-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2250MB) ...
	
	

-- /stdout --
** stderr ** 
	E0516 21:59:15.891426    7756 network_create.go:104] error while trying to create docker network nospam-20220516215858-2444 192.168.76.0/24: create docker network nospam-20220516215858-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true nospam-20220516215858-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 915d52436722c2b7c9b0b3b4676f5739a0caada4aa47560d18fc19d6a804d523 (br-915d52436722): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network nospam-20220516215858-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true nospam-20220516215858-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 915d52436722c2b7c9b0b3b4676f5739a0caada4aa47560d18fc19d6a804d523 (br-915d52436722): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for nospam-20220516215858-2444 container: docker volume create nospam-20220516215858-2444 --label name.minikube.sigs.k8s.io=nospam-20220516215858-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create nospam-20220516215858-2444: error while creating volume root path '/var/lib/docker/volumes/nospam-20220516215858-2444': mkdir /var/lib/docker/volumes/nospam-20220516215858-2444: read-only file system
	
	E0516 22:00:02.402508    7756 network_create.go:104] error while trying to create docker network nospam-20220516215858-2444 192.168.85.0/24: create docker network nospam-20220516215858-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true nospam-20220516215858-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 05fb553e831a610ac4ce74441c5277f09c97d2cb5cc168057d8598e490072524 (br-05fb553e831a): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network nospam-20220516215858-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true nospam-20220516215858-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 05fb553e831a610ac4ce74441c5277f09c97d2cb5cc168057d8598e490072524 (br-05fb553e831a): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p nospam-20220516215858-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for nospam-20220516215858-2444 container: docker volume create nospam-20220516215858-2444 --label name.minikube.sigs.k8s.io=nospam-20220516215858-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create nospam-20220516215858-2444: error while creating volume root path '/var/lib/docker/volumes/nospam-20220516215858-2444': mkdir /var/lib/docker/volumes/nospam-20220516215858-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for nospam-20220516215858-2444 container: docker volume create nospam-20220516215858-2444 --label name.minikube.sigs.k8s.io=nospam-20220516215858-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create nospam-20220516215858-2444: error while creating volume root path '/var/lib/docker/volumes/nospam-20220516215858-2444': mkdir /var/lib/docker/volumes/nospam-20220516215858-2444: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
error_spam_test.go:80: "out/minikube-windows-amd64.exe start -p nospam-20220516215858-2444 -n=1 --memory=2250 --wait=false --log_dir=C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220516215858-2444 --driver=docker" failed: exit status 60
error_spam_test.go:93: unexpected stderr: "E0516 21:59:15.891426    7756 network_create.go:104] error while trying to create docker network nospam-20220516215858-2444 192.168.76.0/24: create docker network nospam-20220516215858-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true nospam-20220516215858-2444: exit status 1"
error_spam_test.go:93: unexpected stderr: "stdout:"
error_spam_test.go:93: unexpected stderr: "stderr:"
error_spam_test.go:93: unexpected stderr: "Error response from daemon: cannot create network 915d52436722c2b7c9b0b3b4676f5739a0caada4aa47560d18fc19d6a804d523 (br-915d52436722): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4"
error_spam_test.go:93: unexpected stderr: "! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network nospam-20220516215858-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true nospam-20220516215858-2444: exit status 1"
error_spam_test.go:93: unexpected stderr: "stdout:"
error_spam_test.go:93: unexpected stderr: "stderr:"
error_spam_test.go:93: unexpected stderr: "Error response from daemon: cannot create network 915d52436722c2b7c9b0b3b4676f5739a0caada4aa47560d18fc19d6a804d523 (br-915d52436722): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4"
error_spam_test.go:93: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for nospam-20220516215858-2444 container: docker volume create nospam-20220516215858-2444 --label name.minikube.sigs.k8s.io=nospam-20220516215858-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1"
error_spam_test.go:93: unexpected stderr: "stdout:"
error_spam_test.go:93: unexpected stderr: "stderr:"
error_spam_test.go:93: unexpected stderr: "Error response from daemon: create nospam-20220516215858-2444: error while creating volume root path '/var/lib/docker/volumes/nospam-20220516215858-2444': mkdir /var/lib/docker/volumes/nospam-20220516215858-2444: read-only file system"
error_spam_test.go:93: unexpected stderr: "E0516 22:00:02.402508    7756 network_create.go:104] error while trying to create docker network nospam-20220516215858-2444 192.168.85.0/24: create docker network nospam-20220516215858-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true nospam-20220516215858-2444: exit status 1"
error_spam_test.go:93: unexpected stderr: "stdout:"
error_spam_test.go:93: unexpected stderr: "stderr:"
error_spam_test.go:93: unexpected stderr: "Error response from daemon: cannot create network 05fb553e831a610ac4ce74441c5277f09c97d2cb5cc168057d8598e490072524 (br-05fb553e831a): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4"
error_spam_test.go:93: unexpected stderr: "! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network nospam-20220516215858-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true nospam-20220516215858-2444: exit status 1"
error_spam_test.go:93: unexpected stderr: "stdout:"
error_spam_test.go:93: unexpected stderr: "stderr:"
error_spam_test.go:93: unexpected stderr: "Error response from daemon: cannot create network 05fb553e831a610ac4ce74441c5277f09c97d2cb5cc168057d8598e490072524 (br-05fb553e831a): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4"
error_spam_test.go:93: unexpected stderr: "* Failed to start docker container. Running \"minikube delete -p nospam-20220516215858-2444\" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for nospam-20220516215858-2444 container: docker volume create nospam-20220516215858-2444 --label name.minikube.sigs.k8s.io=nospam-20220516215858-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1"
error_spam_test.go:93: unexpected stderr: "stdout:"
error_spam_test.go:93: unexpected stderr: "stderr:"
error_spam_test.go:93: unexpected stderr: "Error response from daemon: create nospam-20220516215858-2444: error while creating volume root path '/var/lib/docker/volumes/nospam-20220516215858-2444': mkdir /var/lib/docker/volumes/nospam-20220516215858-2444: read-only file system"
error_spam_test.go:93: unexpected stderr: "X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for nospam-20220516215858-2444 container: docker volume create nospam-20220516215858-2444 --label name.minikube.sigs.k8s.io=nospam-20220516215858-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1"
error_spam_test.go:93: unexpected stderr: "stdout:"
error_spam_test.go:93: unexpected stderr: "stderr:"
error_spam_test.go:93: unexpected stderr: "Error response from daemon: create nospam-20220516215858-2444: error while creating volume root path '/var/lib/docker/volumes/nospam-20220516215858-2444': mkdir /var/lib/docker/volumes/nospam-20220516215858-2444: read-only file system"
error_spam_test.go:93: unexpected stderr: "* Suggestion: Restart Docker"
error_spam_test.go:93: unexpected stderr: "* Related issue: https://github.com/kubernetes/minikube/issues/6825"
error_spam_test.go:107: minikube stdout:
* [nospam-20220516215858-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
- KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
- MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
- MINIKUBE_LOCATION=12739
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the docker driver based on user configuration
* Using Docker Desktop driver with the root privilege
* Starting control plane node nospam-20220516215858-2444 in cluster nospam-20220516215858-2444
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=2250MB) ...
* docker "nospam-20220516215858-2444" container is missing, will recreate.
* Creating docker container (CPUs=2, Memory=2250MB) ...


error_spam_test.go:108: minikube stderr:
E0516 21:59:15.891426    7756 network_create.go:104] error while trying to create docker network nospam-20220516215858-2444 192.168.76.0/24: create docker network nospam-20220516215858-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true nospam-20220516215858-2444: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 915d52436722c2b7c9b0b3b4676f5739a0caada4aa47560d18fc19d6a804d523 (br-915d52436722): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network nospam-20220516215858-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true nospam-20220516215858-2444: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 915d52436722c2b7c9b0b3b4676f5739a0caada4aa47560d18fc19d6a804d523 (br-915d52436722): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4

! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for nospam-20220516215858-2444 container: docker volume create nospam-20220516215858-2444 --label name.minikube.sigs.k8s.io=nospam-20220516215858-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create nospam-20220516215858-2444: error while creating volume root path '/var/lib/docker/volumes/nospam-20220516215858-2444': mkdir /var/lib/docker/volumes/nospam-20220516215858-2444: read-only file system

E0516 22:00:02.402508    7756 network_create.go:104] error while trying to create docker network nospam-20220516215858-2444 192.168.85.0/24: create docker network nospam-20220516215858-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true nospam-20220516215858-2444: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 05fb553e831a610ac4ce74441c5277f09c97d2cb5cc168057d8598e490072524 (br-05fb553e831a): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network nospam-20220516215858-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true nospam-20220516215858-2444: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 05fb553e831a610ac4ce74441c5277f09c97d2cb5cc168057d8598e490072524 (br-05fb553e831a): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4

* Failed to start docker container. Running "minikube delete -p nospam-20220516215858-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for nospam-20220516215858-2444 container: docker volume create nospam-20220516215858-2444 --label name.minikube.sigs.k8s.io=nospam-20220516215858-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create nospam-20220516215858-2444: error while creating volume root path '/var/lib/docker/volumes/nospam-20220516215858-2444': mkdir /var/lib/docker/volumes/nospam-20220516215858-2444: read-only file system

X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for nospam-20220516215858-2444 container: docker volume create nospam-20220516215858-2444 --label name.minikube.sigs.k8s.io=nospam-20220516215858-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create nospam-20220516215858-2444: error while creating volume root path '/var/lib/docker/volumes/nospam-20220516215858-2444': mkdir /var/lib/docker/volumes/nospam-20220516215858-2444: read-only file system

* Suggestion: Restart Docker
* Related issue: https://github.com/kubernetes/minikube/issues/6825
error_spam_test.go:118: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:118: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:118: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (77.92s)
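Editor's note: the recurring `networks have overlapping IPv4` failures above mean the subnet minikube asked Docker to create collides with an existing bridge network. A minimal sketch of that overlap check, using Python's standard `ipaddress` module (the subnets below are taken from the log; the helper name is illustrative):

```python
import ipaddress

def subnets_overlap(a: str, b: str) -> bool:
    """True if the two CIDR ranges share any addresses -- the condition
    that makes the Docker daemon reject `docker network create`."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

# The subnet minikube requested vs. a hypothetical leftover bridge network:
print(subnets_overlap("192.168.76.0/24", "192.168.76.0/24"))  # True
print(subnets_overlap("192.168.76.0/24", "192.168.85.0/24"))  # False
```

In practice the colliding networks can be listed with `docker network ls` and inspected with `docker network inspect <name>`; stale ones are removable with `docker network prune`.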

TestFunctional/serial/StartWithProxy (81.05s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2160: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220516220221-2444 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker
functional_test.go:2160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20220516220221-2444 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker: exit status 60 (1m17.1351145s)

-- stdout --
	* [functional-20220516220221-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node functional-20220516220221-2444 in cluster functional-20220516220221-2444
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	* docker "functional-20220516220221-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:54066 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:54066 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:54066 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:54066 to docker env.
	E0516 22:02:39.045605    5640 network_create.go:104] error while trying to create docker network functional-20220516220221-2444 192.168.76.0/24: create docker network functional-20220516220221-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4952536fcfff5773a4415cab1c3ced551564e7561954df9714ea7f4bd459af7d (br-4952536fcfff): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220516220221-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4952536fcfff5773a4415cab1c3ced551564e7561954df9714ea7f4bd459af7d (br-4952536fcfff): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for functional-20220516220221-2444 container: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220516220221-2444: error while creating volume root path '/var/lib/docker/volumes/functional-20220516220221-2444': mkdir /var/lib/docker/volumes/functional-20220516220221-2444: read-only file system
	
	! Local proxy ignored: not passing HTTP_PROXY=localhost:54066 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:54066 to docker env.
	E0516 22:03:25.211246    5640 network_create.go:104] error while trying to create docker network functional-20220516220221-2444 192.168.85.0/24: create docker network functional-20220516220221-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ba3e84b12e1eeb37e98a36d736612197d0f254049b25606c63a46427aef25680 (br-ba3e84b12e1e): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220516220221-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ba3e84b12e1eeb37e98a36d736612197d0f254049b25606c63a46427aef25680 (br-ba3e84b12e1e): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p functional-20220516220221-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220516220221-2444 container: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220516220221-2444: error while creating volume root path '/var/lib/docker/volumes/functional-20220516220221-2444': mkdir /var/lib/docker/volumes/functional-20220516220221-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220516220221-2444 container: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220516220221-2444: error while creating volume root path '/var/lib/docker/volumes/functional-20220516220221-2444': mkdir /var/lib/docker/volumes/functional-20220516220221-2444: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
functional_test.go:2162: failed minikube start. args "out/minikube-windows-amd64.exe start -p functional-20220516220221-2444 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker": exit status 60
functional_test.go:2167: start stdout=* [functional-20220516220221-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
- KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
- MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
- MINIKUBE_LOCATION=12739
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the docker driver based on user configuration
* Using Docker Desktop driver with the root privilege
* Starting control plane node functional-20220516220221-2444 in cluster functional-20220516220221-2444
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=4000MB) ...
* docker "functional-20220516220221-2444" container is missing, will recreate.
* Creating docker container (CPUs=2, Memory=4000MB) ...

, want: *Found network options:*
functional_test.go:2172: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:54066 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:54066 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:54066 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:54066 to docker env.
E0516 22:02:39.045605    5640 network_create.go:104] error while trying to create docker network functional-20220516220221-2444 192.168.76.0/24: create docker network functional-20220516220221-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 4952536fcfff5773a4415cab1c3ced551564e7561954df9714ea7f4bd459af7d (br-4952536fcfff): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220516220221-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 4952536fcfff5773a4415cab1c3ced551564e7561954df9714ea7f4bd459af7d (br-4952536fcfff): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4

! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for functional-20220516220221-2444 container: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220516220221-2444: error while creating volume root path '/var/lib/docker/volumes/functional-20220516220221-2444': mkdir /var/lib/docker/volumes/functional-20220516220221-2444: read-only file system

! Local proxy ignored: not passing HTTP_PROXY=localhost:54066 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:54066 to docker env.
E0516 22:03:25.211246    5640 network_create.go:104] error while trying to create docker network functional-20220516220221-2444 192.168.85.0/24: create docker network functional-20220516220221-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network ba3e84b12e1eeb37e98a36d736612197d0f254049b25606c63a46427aef25680 (br-ba3e84b12e1e): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220516220221-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network ba3e84b12e1eeb37e98a36d736612197d0f254049b25606c63a46427aef25680 (br-ba3e84b12e1e): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4

* Failed to start docker container. Running "minikube delete -p functional-20220516220221-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220516220221-2444 container: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220516220221-2444: error while creating volume root path '/var/lib/docker/volumes/functional-20220516220221-2444': mkdir /var/lib/docker/volumes/functional-20220516220221-2444: read-only file system

X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220516220221-2444 container: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220516220221-2444: error while creating volume root path '/var/lib/docker/volumes/functional-20220516220221-2444': mkdir /var/lib/docker/volumes/functional-20220516220221-2444: read-only file system

* Suggestion: Restart Docker
* Related issue: https://github.com/kubernetes/minikube/issues/6825
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/StartWithProxy]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220516220221-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220516220221-2444: exit status 1 (1.0963759s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220516220221-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220516220221-2444 -n functional-20220516220221-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220516220221-2444 -n functional-20220516220221-2444: exit status 7 (2.7943141s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:03:42.472745    1524 status.go:247] status error: host: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220516220221-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/StartWithProxy (81.05s)
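Editor's note: the log shows minikube retrying with a different subnet after each conflict (192.168.76.0/24, then 192.168.85.0/24, a jump of 9 in the third octet). A hedged sketch of that kind of candidate walk, with the claimed subnets hard-coded from the log's conflict messages (a real diagnosis would read them from `docker network inspect`; minikube's actual allocator in `network_create.go` is more involved):

```python
import ipaddress

# Subnets already claimed by existing bridge networks, per the log.
existing = [ipaddress.ip_network("192.168.76.0/24"),
            ipaddress.ip_network("192.168.85.0/24")]

def first_free_subnet(start: str, step: int = 9, tries: int = 20):
    """Walk 192.168.x.0/24 candidates, stepping the third octet,
    until one does not overlap any existing network."""
    net = ipaddress.ip_network(start)
    for _ in range(tries):
        if not any(net.overlaps(e) for e in existing):
            return net
        third = net.network_address.packed[2] + step
        net = ipaddress.ip_network(f"192.168.{third}.0/24")
    return None

print(first_free_subnet("192.168.76.0/24"))  # 192.168.94.0/24
```

Note that finding a free subnet would not by itself fix this run: the subsequent `docker volume create` failures (`read-only file system` under `/var/lib/docker/volumes`) point at a wedged Docker Desktop backend, consistent with the log's own "Suggestion: Restart Docker".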

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
functional_test.go:630: audit.json does not contain the profile "functional-20220516220221-2444"
--- FAIL: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (116.77s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:651: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220516220221-2444 --alsologtostderr -v=8
functional_test.go:651: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20220516220221-2444 --alsologtostderr -v=8: exit status 60 (1m52.6582758s)

-- stdout --
	* [functional-20220516220221-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node functional-20220516220221-2444 in cluster functional-20220516220221-2444
	* Pulling base image ...
	* docker "functional-20220516220221-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	* docker "functional-20220516220221-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0516 22:03:42.734907    6756 out.go:296] Setting OutFile to fd 664 ...
	I0516 22:03:42.799820    6756 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:03:42.799820    6756 out.go:309] Setting ErrFile to fd 636...
	I0516 22:03:42.799820    6756 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:03:42.812059    6756 out.go:303] Setting JSON to false
	I0516 22:03:42.814609    6756 start.go:115] hostinfo: {"hostname":"minikube2","uptime":1735,"bootTime":1652736887,"procs":147,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0516 22:03:42.814609    6756 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0516 22:03:42.819313    6756 out.go:177] * [functional-20220516220221-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0516 22:03:42.822504    6756 notify.go:193] Checking for updates...
	I0516 22:03:42.825094    6756 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0516 22:03:42.827390    6756 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0516 22:03:42.837909    6756 out.go:177]   - MINIKUBE_LOCATION=12739
	I0516 22:03:42.840312    6756 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0516 22:03:42.843538    6756 config.go:178] Loaded profile config "functional-20220516220221-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:03:42.843538    6756 driver.go:358] Setting default libvirt URI to qemu:///system
	I0516 22:03:45.365164    6756 docker.go:137] docker version: linux-20.10.14
	I0516 22:03:45.375689    6756 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:03:47.333828    6756 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9544899s)
	I0516 22:03:47.334563    6756 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-16 22:03:46.3260952 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:03:47.341017    6756 out.go:177] * Using the docker driver based on existing profile
	I0516 22:03:47.343599    6756 start.go:284] selected driver: docker
	I0516 22:03:47.343633    6756 start.go:806] validating driver "docker" against &{Name:functional-20220516220221-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220516220221-2444 Namespace:default APIServerName:mini
kubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false}
	I0516 22:03:47.343806    6756 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0516 22:03:47.365056    6756 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:03:49.392744    6756 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.027625s)
	I0516 22:03:49.393318    6756 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-16 22:03:48.3319493 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:03:49.451869    6756 cni.go:95] Creating CNI manager for ""
	I0516 22:03:49.451895    6756 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0516 22:03:49.451993    6756 start_flags.go:306] config:
	{Name:functional-20220516220221-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220516220221-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 22:03:49.456331    6756 out.go:177] * Starting control plane node functional-20220516220221-2444 in cluster functional-20220516220221-2444
	I0516 22:03:49.458988    6756 cache.go:120] Beginning downloading kic base image for docker with docker
	I0516 22:03:49.460989    6756 out.go:177] * Pulling base image ...
	I0516 22:03:49.462690    6756 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0516 22:03:49.462690    6756 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0516 22:03:49.462690    6756 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0516 22:03:49.462690    6756 cache.go:57] Caching tarball of preloaded images
	I0516 22:03:49.462690    6756 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0516 22:03:49.462690    6756 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0516 22:03:49.465730    6756 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-20220516220221-2444\config.json ...
	I0516 22:03:50.506742    6756 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0516 22:03:50.506852    6756 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:03:50.507234    6756 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:03:50.507312    6756 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
	I0516 22:03:50.507380    6756 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory, skipping pull
	I0516 22:03:50.507380    6756 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in cache, skipping pull
	I0516 22:03:50.507380    6756 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
	I0516 22:03:50.507380    6756 cache.go:160] Loading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from local cache
	I0516 22:03:50.507380    6756 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:03:52.673059    6756 image.go:219] response: {"errorDetail":{"message":"mkdir /var/lib/docker/tmp/docker-import-629445057: read-only file system"},"error":"mkdir /var/lib/docker/tmp/docker-import-629445057: read-only file system"}
	I0516 22:03:52.673197    6756 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from cached tarball
	I0516 22:03:52.673277    6756 cache.go:206] Successfully downloaded all kic artifacts
	I0516 22:03:52.673277    6756 start.go:352] acquiring machines lock for functional-20220516220221-2444: {Name:mkdcc2ea8456bfc6c4e9b4af97ac214783a7ee2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:03:52.673277    6756 start.go:356] acquired machines lock for "functional-20220516220221-2444" in 0s
	I0516 22:03:52.674169    6756 start.go:94] Skipping create...Using existing machine configuration
	I0516 22:03:52.674169    6756 fix.go:55] fixHost starting: 
	I0516 22:03:52.693076    6756 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:03:53.710907    6756 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:03:53.711014    6756 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0174177s)
	I0516 22:03:53.711014    6756 fix.go:103] recreateIfNeeded on functional-20220516220221-2444: state= err=unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:03:53.711014    6756 fix.go:108] machineExists: false. err=machine does not exist
	I0516 22:03:53.717143    6756 out.go:177] * docker "functional-20220516220221-2444" container is missing, will recreate.
	I0516 22:03:53.719417    6756 delete.go:124] DEMOLISHING functional-20220516220221-2444 ...
	I0516 22:03:53.732590    6756 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:03:54.744934    6756 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:03:54.745091    6756 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0120937s)
	W0516 22:03:54.745169    6756 stop.go:75] unable to get state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:03:54.745235    6756 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:03:54.759747    6756 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:03:55.805762    6756 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:03:55.805762    6756 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0458148s)
	I0516 22:03:55.805959    6756 delete.go:82] Unable to get host status for functional-20220516220221-2444, assuming it has already been deleted: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:03:55.815070    6756 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220516220221-2444
	W0516 22:03:56.836417    6756 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220516220221-2444 returned with exit code 1
	I0516 22:03:56.836523    6756 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220516220221-2444: (1.0211377s)
	I0516 22:03:56.836623    6756 kic.go:356] could not find the container functional-20220516220221-2444 to remove it. will try anyways
	I0516 22:03:56.845392    6756 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:03:57.845938    6756 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:03:57.846003    6756 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.000435s)
	W0516 22:03:57.846059    6756 oci.go:84] error getting container status, will try to delete anyways: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:03:57.854545    6756 cli_runner.go:164] Run: docker exec --privileged -t functional-20220516220221-2444 /bin/bash -c "sudo init 0"
	W0516 22:03:58.895223    6756 cli_runner.go:211] docker exec --privileged -t functional-20220516220221-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 22:03:58.895288    6756 cli_runner.go:217] Completed: docker exec --privileged -t functional-20220516220221-2444 /bin/bash -c "sudo init 0": (1.0404605s)
	I0516 22:03:58.895460    6756 oci.go:641] error shutdown functional-20220516220221-2444: docker exec --privileged -t functional-20220516220221-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:03:59.917810    6756 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:04:00.930591    6756 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:04:00.930767    6756 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0125427s)
	I0516 22:04:00.930883    6756 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:04:00.930883    6756 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
	I0516 22:04:00.930883    6756 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:04:01.494125    6756 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:04:02.518442    6756 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:04:02.518488    6756 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0242519s)
	I0516 22:04:02.518584    6756 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:04:02.518584    6756 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
	I0516 22:04:02.518584    6756 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:04:03.611187    6756 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:04:04.641113    6756 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:04:04.641298    6756 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0297463s)
	I0516 22:04:04.641391    6756 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:04:04.641562    6756 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
	I0516 22:04:04.641628    6756 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:04:05.971017    6756 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:04:06.980547    6756 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:04:06.980828    6756 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.009465s)
	I0516 22:04:06.980908    6756 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:04:06.980952    6756 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
	I0516 22:04:06.981021    6756 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:04:08.591520    6756 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:04:09.600052    6756 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:04:09.600106    6756 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0084623s)
	I0516 22:04:09.600392    6756 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:04:09.600443    6756 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
	I0516 22:04:09.600443    6756 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:04:11.954635    6756 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:04:12.944916    6756 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:04:12.945186    6756 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:04:12.945186    6756 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
	I0516 22:04:12.945186    6756 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:04:17.473766    6756 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:04:18.478191    6756 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:04:18.478191    6756 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0041299s)
	I0516 22:04:18.478191    6756 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:04:18.478191    6756 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
	I0516 22:04:18.478191    6756 oci.go:88] couldn't shut down functional-20220516220221-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	 
	I0516 22:04:18.486521    6756 cli_runner.go:164] Run: docker rm -f -v functional-20220516220221-2444
	I0516 22:04:19.518941    6756 cli_runner.go:217] Completed: docker rm -f -v functional-20220516220221-2444: (1.0322798s)
	I0516 22:04:19.527516    6756 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220516220221-2444
	W0516 22:04:20.549817    6756 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220516220221-2444 returned with exit code 1
	I0516 22:04:20.549849    6756 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220516220221-2444: (1.0220091s)
	I0516 22:04:20.559329    6756 cli_runner.go:164] Run: docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:04:21.568489    6756 cli_runner.go:211] docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:04:21.568554    6756 cli_runner.go:217] Completed: docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0087875s)
	I0516 22:04:21.578852    6756 network_create.go:272] running [docker network inspect functional-20220516220221-2444] to gather additional debugging logs...
	I0516 22:04:21.578852    6756 cli_runner.go:164] Run: docker network inspect functional-20220516220221-2444
	W0516 22:04:22.607476    6756 cli_runner.go:211] docker network inspect functional-20220516220221-2444 returned with exit code 1
	I0516 22:04:22.607842    6756 cli_runner.go:217] Completed: docker network inspect functional-20220516220221-2444: (1.028619s)
	I0516 22:04:22.607882    6756 network_create.go:275] error running [docker network inspect functional-20220516220221-2444]: docker network inspect functional-20220516220221-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20220516220221-2444
	I0516 22:04:22.607910    6756 network_create.go:277] output of [docker network inspect functional-20220516220221-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20220516220221-2444
	
	** /stderr **
	W0516 22:04:22.608772    6756 delete.go:139] delete failed (probably ok) <nil>
	I0516 22:04:22.608772    6756 fix.go:115] Sleeping 1 second for extra luck!
	I0516 22:04:23.627221    6756 start.go:131] createHost starting for "" (driver="docker")
	I0516 22:04:23.631366    6756 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0516 22:04:23.631366    6756 start.go:165] libmachine.API.Create for "functional-20220516220221-2444" (driver="docker")
	I0516 22:04:23.631366    6756 client.go:168] LocalClient.Create starting
	I0516 22:04:23.632190    6756 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 22:04:23.632190    6756 main.go:134] libmachine: Decoding PEM data...
	I0516 22:04:23.632190    6756 main.go:134] libmachine: Parsing certificate...
	I0516 22:04:23.632190    6756 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 22:04:23.632922    6756 main.go:134] libmachine: Decoding PEM data...
	I0516 22:04:23.632922    6756 main.go:134] libmachine: Parsing certificate...
	I0516 22:04:23.640598    6756 cli_runner.go:164] Run: docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:04:24.674195    6756 cli_runner.go:211] docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:04:24.674195    6756 cli_runner.go:217] Completed: docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0335926s)
	I0516 22:04:24.681734    6756 network_create.go:272] running [docker network inspect functional-20220516220221-2444] to gather additional debugging logs...
	I0516 22:04:24.681734    6756 cli_runner.go:164] Run: docker network inspect functional-20220516220221-2444
	W0516 22:04:25.707286    6756 cli_runner.go:211] docker network inspect functional-20220516220221-2444 returned with exit code 1
	I0516 22:04:25.707286    6756 cli_runner.go:217] Completed: docker network inspect functional-20220516220221-2444: (1.0253969s)
	I0516 22:04:25.707286    6756 network_create.go:275] error running [docker network inspect functional-20220516220221-2444]: docker network inspect functional-20220516220221-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20220516220221-2444
	I0516 22:04:25.707286    6756 network_create.go:277] output of [docker network inspect functional-20220516220221-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20220516220221-2444
	
	** /stderr **
	I0516 22:04:25.715484    6756 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 22:04:26.747283    6756 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0315461s)
	I0516 22:04:26.769861    6756 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000724778] misses:0}
	I0516 22:04:26.769861    6756 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:04:26.772052    6756 network_create.go:115] attempt to create docker network functional-20220516220221-2444 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0516 22:04:26.781090    6756 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444
	W0516 22:04:27.776160    6756 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444 returned with exit code 1
	W0516 22:04:27.776160    6756 network_create.go:107] failed to create docker network functional-20220516220221-2444 192.168.49.0/24, will retry: subnet is taken
	I0516 22:04:27.793311    6756 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000724778] amended:false}} dirty:map[] misses:0}
	I0516 22:04:27.793311    6756 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:04:27.807242    6756 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000724778] amended:true}} dirty:map[192.168.49.0:0xc000724778 192.168.58.0:0xc00060e458] misses:0}
	I0516 22:04:27.809935    6756 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:04:27.809935    6756 network_create.go:115] attempt to create docker network functional-20220516220221-2444 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0516 22:04:27.819301    6756 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444
	W0516 22:04:28.846702    6756 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444 returned with exit code 1
	I0516 22:04:28.846702    6756 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: (1.0273772s)
	W0516 22:04:28.846702    6756 network_create.go:107] failed to create docker network functional-20220516220221-2444 192.168.58.0/24, will retry: subnet is taken
	I0516 22:04:28.865702    6756 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000724778] amended:true}} dirty:map[192.168.49.0:0xc000724778 192.168.58.0:0xc00060e458] misses:1}
	I0516 22:04:28.866268    6756 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:04:28.882652    6756 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000724778] amended:true}} dirty:map[192.168.49.0:0xc000724778 192.168.58.0:0xc00060e458 192.168.67.0:0xc0006c82c8] misses:1}
	I0516 22:04:28.882652    6756 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:04:28.882652    6756 network_create.go:115] attempt to create docker network functional-20220516220221-2444 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0516 22:04:28.891723    6756 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444
	W0516 22:04:29.895046    6756 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444 returned with exit code 1
	I0516 22:04:29.895265    6756 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: (1.0032504s)
	W0516 22:04:29.895302    6756 network_create.go:107] failed to create docker network functional-20220516220221-2444 192.168.67.0/24, will retry: subnet is taken
	I0516 22:04:29.911519    6756 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000724778] amended:true}} dirty:map[192.168.49.0:0xc000724778 192.168.58.0:0xc00060e458 192.168.67.0:0xc0006c82c8] misses:2}
	I0516 22:04:29.911519    6756 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:04:29.923718    6756 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000724778] amended:true}} dirty:map[192.168.49.0:0xc000724778 192.168.58.0:0xc00060e458 192.168.67.0:0xc0006c82c8 192.168.76.0:0xc000724878] misses:2}
	I0516 22:04:29.923718    6756 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:04:29.923718    6756 network_create.go:115] attempt to create docker network functional-20220516220221-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0516 22:04:29.933710    6756 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444
	W0516 22:04:30.981483    6756 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444 returned with exit code 1
	I0516 22:04:30.981483    6756 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: (1.0474866s)
	E0516 22:04:30.981483    6756 network_create.go:104] error while trying to create docker network functional-20220516220221-2444 192.168.76.0/24: create docker network functional-20220516220221-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ff94abec8b0a639bf60bc38bfefcc87d54f1c2e2166551d42df5cb1c4035b6c8 (br-ff94abec8b0a): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	W0516 22:04:30.981483    6756 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220516220221-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ff94abec8b0a639bf60bc38bfefcc87d54f1c2e2166551d42df5cb1c4035b6c8 (br-ff94abec8b0a): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220516220221-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ff94abec8b0a639bf60bc38bfefcc87d54f1c2e2166551d42df5cb1c4035b6c8 (br-ff94abec8b0a): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	I0516 22:04:30.998866    6756 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 22:04:32.018465    6756 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0195433s)
	I0516 22:04:32.026319    6756 cli_runner.go:164] Run: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 22:04:33.121286    6756 cli_runner.go:211] docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 22:04:33.121459    6756 cli_runner.go:217] Completed: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true: (1.094916s)
	I0516 22:04:33.121532    6756 client.go:171] LocalClient.Create took 9.4901235s
	I0516 22:04:35.147741    6756 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:04:35.149980    6756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
	W0516 22:04:36.189388    6756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
	I0516 22:04:36.189408    6756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0392352s)
	I0516 22:04:36.189592    6756 retry.go:31] will retry after 164.129813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:04:36.363567    6756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
	W0516 22:04:37.395303    6756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
	I0516 22:04:37.395390    6756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0315329s)
	W0516 22:04:37.395590    6756 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	W0516 22:04:37.395650    6756 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:04:37.407561    6756 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:04:37.413730    6756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
	W0516 22:04:38.449223    6756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
	I0516 22:04:38.449290    6756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0352541s)
	I0516 22:04:38.449290    6756 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:04:38.667396    6756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
	W0516 22:04:39.696679    6756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
	I0516 22:04:39.696770    6756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0289164s)
	W0516 22:04:39.696770    6756 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	W0516 22:04:39.696770    6756 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:04:39.696770    6756 start.go:134] duration metric: createHost completed in 16.0692008s
	I0516 22:04:39.708332    6756 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:04:39.716050    6756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
	W0516 22:04:40.751267    6756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
	I0516 22:04:40.751302    6756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0350846s)
	I0516 22:04:40.751790    6756 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:04:41.110080    6756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
	W0516 22:04:42.138592    6756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
	I0516 22:04:42.138944    6756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.028507s)
	W0516 22:04:42.139129    6756 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	W0516 22:04:42.139158    6756 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:04:42.150239    6756 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:04:42.157087    6756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
	W0516 22:04:43.157868    6756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
	I0516 22:04:43.157893    6756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0005304s)
	I0516 22:04:43.158032    6756 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:04:43.392647    6756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
	W0516 22:04:44.414800    6756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
	I0516 22:04:44.414909    6756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.021947s)
	W0516 22:04:44.414909    6756 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	W0516 22:04:44.414909    6756 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:04:44.414909    6756 fix.go:57] fixHost completed within 51.7405141s
	I0516 22:04:44.414909    6756 start.go:81] releasing machines lock for "functional-20220516220221-2444", held for 51.7414062s
	W0516 22:04:44.414909    6756 start.go:608] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220516220221-2444 container: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220516220221-2444: error while creating volume root path '/var/lib/docker/volumes/functional-20220516220221-2444': mkdir /var/lib/docker/volumes/functional-20220516220221-2444: read-only file system
	W0516 22:04:44.415764    6756 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220516220221-2444 container: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220516220221-2444: error while creating volume root path '/var/lib/docker/volumes/functional-20220516220221-2444': mkdir /var/lib/docker/volumes/functional-20220516220221-2444: read-only file system
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220516220221-2444 container: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220516220221-2444: error while creating volume root path '/var/lib/docker/volumes/functional-20220516220221-2444': mkdir /var/lib/docker/volumes/functional-20220516220221-2444: read-only file system
	
	I0516 22:04:44.415826    6756 start.go:623] Will try again in 5 seconds ...
	I0516 22:04:49.425893    6756 start.go:352] acquiring machines lock for functional-20220516220221-2444: {Name:mkdcc2ea8456bfc6c4e9b4af97ac214783a7ee2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:04:49.426154    6756 start.go:356] acquired machines lock for "functional-20220516220221-2444" in 221.7µs
	I0516 22:04:49.426355    6756 start.go:94] Skipping create...Using existing machine configuration
	I0516 22:04:49.426355    6756 fix.go:55] fixHost starting: 
	I0516 22:04:49.450198    6756 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:04:50.482274    6756 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:04:50.482274    6756 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0318634s)
	I0516 22:04:50.482515    6756 fix.go:103] recreateIfNeeded on functional-20220516220221-2444: state= err=unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:04:50.482515    6756 fix.go:108] machineExists: false. err=machine does not exist
	I0516 22:04:50.486717    6756 out.go:177] * docker "functional-20220516220221-2444" container is missing, will recreate.
	I0516 22:04:50.489336    6756 delete.go:124] DEMOLISHING functional-20220516220221-2444 ...
	I0516 22:04:50.504217    6756 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:04:51.524265    6756 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:04:51.524416    6756 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0197726s)
	W0516 22:04:51.524472    6756 stop.go:75] unable to get state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:04:51.524526    6756 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:04:51.538058    6756 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:04:52.569161    6756 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:04:52.569191    6756 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0310408s)
	I0516 22:04:52.569374    6756 delete.go:82] Unable to get host status for functional-20220516220221-2444, assuming it has already been deleted: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:04:52.578342    6756 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220516220221-2444
	W0516 22:04:53.595457    6756 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220516220221-2444 returned with exit code 1
	I0516 22:04:53.595530    6756 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220516220221-2444: (1.0170293s)
	I0516 22:04:53.595530    6756 kic.go:356] could not find the container functional-20220516220221-2444 to remove it. will try anyways
	I0516 22:04:53.604102    6756 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:04:54.622133    6756 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:04:54.622133    6756 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0180266s)
	W0516 22:04:54.622133    6756 oci.go:84] error getting container status, will try to delete anyways: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:04:54.631126    6756 cli_runner.go:164] Run: docker exec --privileged -t functional-20220516220221-2444 /bin/bash -c "sudo init 0"
	W0516 22:04:55.680573    6756 cli_runner.go:211] docker exec --privileged -t functional-20220516220221-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 22:04:55.680724    6756 cli_runner.go:217] Completed: docker exec --privileged -t functional-20220516220221-2444 /bin/bash -c "sudo init 0": (1.0494213s)
	I0516 22:04:55.680774    6756 oci.go:641] error shutdown functional-20220516220221-2444: docker exec --privileged -t functional-20220516220221-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:04:56.698909    6756 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:04:57.715181    6756 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:04:57.715383    6756 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0159956s)
	I0516 22:04:57.715430    6756 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:04:57.715471    6756 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
	I0516 22:04:57.715471    6756 retry.go:31] will retry after 484.444922ms: couldn't verify container is exited. %v: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:04:58.217704    6756 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:04:59.237943    6756 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:04:59.238152    6756 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0199189s)
	I0516 22:04:59.238240    6756 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:04:59.238240    6756 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
	I0516 22:04:59.238295    6756 retry.go:31] will retry after 587.275613ms: couldn't verify container is exited. %v: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:04:59.841812    6756 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:05:00.836536    6756 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:05:00.836802    6756 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:05:00.836802    6756 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
	I0516 22:05:00.836802    6756 retry.go:31] will retry after 892.239589ms: couldn't verify container is exited. %v: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:05:01.751274    6756 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:05:02.771259    6756 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:05:02.771259    6756 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0196913s)
	I0516 22:05:02.771259    6756 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:05:02.771259    6756 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
	I0516 22:05:02.771259    6756 retry.go:31] will retry after 1.989705391s: couldn't verify container is exited. %v: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:05:04.771429    6756 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:05:05.783782    6756 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:05:05.783884    6756 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0121427s)
	I0516 22:05:05.783946    6756 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:05:05.783985    6756 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
	I0516 22:05:05.784016    6756 retry.go:31] will retry after 1.818837414s: couldn't verify container is exited. %v: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:05:07.622375    6756 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:05:08.620149    6756 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:05:08.620273    6756 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:05:08.620309    6756 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
	I0516 22:05:08.620334    6756 retry.go:31] will retry after 2.669912672s: couldn't verify container is exited. %v: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:05:11.320607    6756 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:05:12.327353    6756 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:05:12.327560    6756 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0066687s)
	I0516 22:05:12.327644    6756 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:05:12.327694    6756 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
	I0516 22:05:12.327694    6756 oci.go:88] couldn't shut down functional-20220516220221-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	 
	I0516 22:05:12.338260    6756 cli_runner.go:164] Run: docker rm -f -v functional-20220516220221-2444
	I0516 22:05:13.346388    6756 cli_runner.go:217] Completed: docker rm -f -v functional-20220516220221-2444: (1.0078745s)
	I0516 22:05:13.355801    6756 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220516220221-2444
	W0516 22:05:14.368120    6756 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220516220221-2444 returned with exit code 1
	I0516 22:05:14.368264    6756 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220516220221-2444: (1.0123149s)
	I0516 22:05:14.376778    6756 cli_runner.go:164] Run: docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:05:15.383480    6756 cli_runner.go:211] docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:05:15.383578    6756 cli_runner.go:217] Completed: docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0064606s)
	I0516 22:05:15.392571    6756 network_create.go:272] running [docker network inspect functional-20220516220221-2444] to gather additional debugging logs...
	I0516 22:05:15.392571    6756 cli_runner.go:164] Run: docker network inspect functional-20220516220221-2444
	W0516 22:05:16.410163    6756 cli_runner.go:211] docker network inspect functional-20220516220221-2444 returned with exit code 1
	I0516 22:05:16.410191    6756 cli_runner.go:217] Completed: docker network inspect functional-20220516220221-2444: (1.017443s)
	I0516 22:05:16.410256    6756 network_create.go:275] error running [docker network inspect functional-20220516220221-2444]: docker network inspect functional-20220516220221-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20220516220221-2444
	I0516 22:05:16.410256    6756 network_create.go:277] output of [docker network inspect functional-20220516220221-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20220516220221-2444
	
	** /stderr **
	W0516 22:05:16.411173    6756 delete.go:139] delete failed (probably ok) <nil>
	I0516 22:05:16.411173    6756 fix.go:115] Sleeping 1 second for extra luck!
	I0516 22:05:17.411782    6756 start.go:131] createHost starting for "" (driver="docker")
	I0516 22:05:17.418582    6756 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0516 22:05:17.418582    6756 start.go:165] libmachine.API.Create for "functional-20220516220221-2444" (driver="docker")
	I0516 22:05:17.418582    6756 client.go:168] LocalClient.Create starting
	I0516 22:05:17.419372    6756 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 22:05:17.419372    6756 main.go:134] libmachine: Decoding PEM data...
	I0516 22:05:17.419372    6756 main.go:134] libmachine: Parsing certificate...
	I0516 22:05:17.419372    6756 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 22:05:17.419372    6756 main.go:134] libmachine: Decoding PEM data...
	I0516 22:05:17.419372    6756 main.go:134] libmachine: Parsing certificate...
	I0516 22:05:17.425671    6756 cli_runner.go:164] Run: docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:05:18.440590    6756 cli_runner.go:211] docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:05:18.440590    6756 cli_runner.go:217] Completed: docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0146934s)
	I0516 22:05:18.450380    6756 network_create.go:272] running [docker network inspect functional-20220516220221-2444] to gather additional debugging logs...
	I0516 22:05:18.450380    6756 cli_runner.go:164] Run: docker network inspect functional-20220516220221-2444
	W0516 22:05:19.468762    6756 cli_runner.go:211] docker network inspect functional-20220516220221-2444 returned with exit code 1
	I0516 22:05:19.468794    6756 cli_runner.go:217] Completed: docker network inspect functional-20220516220221-2444: (1.0181683s)
	I0516 22:05:19.468903    6756 network_create.go:275] error running [docker network inspect functional-20220516220221-2444]: docker network inspect functional-20220516220221-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20220516220221-2444
	I0516 22:05:19.468940    6756 network_create.go:277] output of [docker network inspect functional-20220516220221-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20220516220221-2444
	
	** /stderr **
	I0516 22:05:19.477801    6756 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 22:05:20.478314    6756 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0005084s)
	I0516 22:05:20.494680    6756 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000724778] amended:true}} dirty:map[192.168.49.0:0xc000724778 192.168.58.0:0xc00060e458 192.168.67.0:0xc0006c82c8 192.168.76.0:0xc000724878] misses:2}
	I0516 22:05:20.494680    6756 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:05:20.509558    6756 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000724778] amended:true}} dirty:map[192.168.49.0:0xc000724778 192.168.58.0:0xc00060e458 192.168.67.0:0xc0006c82c8 192.168.76.0:0xc000724878] misses:3}
	I0516 22:05:20.509558    6756 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:05:20.524563    6756 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000724778 192.168.58.0:0xc00060e458 192.168.67.0:0xc0006c82c8 192.168.76.0:0xc000724878] amended:false}} dirty:map[] misses:0}
	I0516 22:05:20.524563    6756 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:05:20.536911    6756 network.go:279] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000724778 192.168.58.0:0xc00060e458 192.168.67.0:0xc0006c82c8 192.168.76.0:0xc000724878] amended:false}} dirty:map[] misses:0}
	I0516 22:05:20.536911    6756 network.go:238] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:05:20.551335    6756 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000724778 192.168.58.0:0xc00060e458 192.168.67.0:0xc0006c82c8 192.168.76.0:0xc000724878] amended:true}} dirty:map[192.168.49.0:0xc000724778 192.168.58.0:0xc00060e458 192.168.67.0:0xc0006c82c8 192.168.76.0:0xc000724878 192.168.85.0:0xc000724c78] misses:0}
	I0516 22:05:20.551335    6756 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:05:20.551335    6756 network_create.go:115] attempt to create docker network functional-20220516220221-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0516 22:05:20.560728    6756 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444
	W0516 22:05:21.560577    6756 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444 returned with exit code 1
	E0516 22:05:21.560768    6756 network_create.go:104] error while trying to create docker network functional-20220516220221-2444 192.168.85.0/24: create docker network functional-20220516220221-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 84d239b3333923ce044340d4029294921e97e426a449182766b389ce58d2dd6f (br-84d239b33339): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	W0516 22:05:21.561088    6756 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220516220221-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 84d239b3333923ce044340d4029294921e97e426a449182766b389ce58d2dd6f (br-84d239b33339): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220516220221-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 84d239b3333923ce044340d4029294921e97e426a449182766b389ce58d2dd6f (br-84d239b33339): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	I0516 22:05:21.575098    6756 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 22:05:22.618642    6756 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0433974s)
	I0516 22:05:22.626908    6756 cli_runner.go:164] Run: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 22:05:23.673912    6756 cli_runner.go:211] docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 22:05:23.674113    6756 cli_runner.go:217] Completed: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true: (1.046999s)
	I0516 22:05:23.674180    6756 client.go:171] LocalClient.Create took 6.2555691s
	I0516 22:05:25.701323    6756 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:05:25.706113    6756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
	W0516 22:05:26.767092    6756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
	I0516 22:05:26.767288    6756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.060786s)
	I0516 22:05:26.767534    6756 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:05:27.063035    6756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
	W0516 22:05:28.079786    6756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
	I0516 22:05:28.079861    6756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0167464s)
	W0516 22:05:28.080083    6756 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	W0516 22:05:28.080112    6756 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:05:28.090152    6756 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:05:28.098027    6756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
	W0516 22:05:29.116571    6756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
	I0516 22:05:29.116656    6756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0185019s)
	I0516 22:05:29.116813    6756 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:05:29.335846    6756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
	W0516 22:05:30.335631    6756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
	W0516 22:05:30.335690    6756 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	W0516 22:05:30.335690    6756 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:05:30.335690    6756 start.go:134] duration metric: createHost completed in 12.9237781s
	I0516 22:05:30.346293    6756 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:05:30.351157    6756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
	W0516 22:05:31.365154    6756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
	I0516 22:05:31.365404    6756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0139927s)
	I0516 22:05:31.365476    6756 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:05:31.703526    6756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
	W0516 22:05:32.724356    6756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
	I0516 22:05:32.724447    6756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0207926s)
	W0516 22:05:32.724516    6756 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	W0516 22:05:32.724516    6756 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:05:32.739020    6756 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:05:32.745273    6756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
	W0516 22:05:33.763475    6756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
	I0516 22:05:33.763528    6756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0181298s)
	I0516 22:05:33.763624    6756 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:05:34.123887    6756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
	W0516 22:05:35.120853    6756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
	W0516 22:05:35.121020    6756 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	W0516 22:05:35.121020    6756 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:05:35.121020    6756 fix.go:57] fixHost completed within 45.6944569s
	I0516 22:05:35.121020    6756 start.go:81] releasing machines lock for "functional-20220516220221-2444", held for 45.6945962s
	W0516 22:05:35.121771    6756 out.go:239] * Failed to start docker container. Running "minikube delete -p functional-20220516220221-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220516220221-2444 container: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220516220221-2444: error while creating volume root path '/var/lib/docker/volumes/functional-20220516220221-2444': mkdir /var/lib/docker/volumes/functional-20220516220221-2444: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p functional-20220516220221-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220516220221-2444 container: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220516220221-2444: error while creating volume root path '/var/lib/docker/volumes/functional-20220516220221-2444': mkdir /var/lib/docker/volumes/functional-20220516220221-2444: read-only file system
	
	I0516 22:05:35.133200    6756 out.go:177] 
	W0516 22:05:35.134903    6756 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220516220221-2444 container: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220516220221-2444: error while creating volume root path '/var/lib/docker/volumes/functional-20220516220221-2444': mkdir /var/lib/docker/volumes/functional-20220516220221-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220516220221-2444 container: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220516220221-2444: error while creating volume root path '/var/lib/docker/volumes/functional-20220516220221-2444': mkdir /var/lib/docker/volumes/functional-20220516220221-2444: read-only file system
	
	W0516 22:05:35.134903    6756 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0516 22:05:35.134903    6756 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0516 22:05:35.141322    6756 out.go:177] 

** /stderr **
functional_test.go:653: failed to soft start minikube. args "out/minikube-windows-amd64.exe start -p functional-20220516220221-2444 --alsologtostderr -v=8": exit status 60
functional_test.go:655: soft start took 1m52.8679348s for "functional-20220516220221-2444" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/SoftStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220516220221-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220516220221-2444: exit status 1 (1.0979713s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220516220221-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220516220221-2444 -n functional-20220516220221-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220516220221-2444 -n functional-20220516220221-2444: exit status 7 (2.7930497s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0516 22:05:39.254593    6728 status.go:247] status error: host: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220516220221-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/SoftStart (116.77s)
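The SoftStart failure above (and most of the failures that follow) trace back to one daemon-side error: `docker volume create` fails with `read-only file system`, which minikube surfaces as `PR_DOCKER_READONLY_VOL`. As a minimal triage sketch, the raw stderr strings captured in this log could be bucketed into the reason codes the run reports. The helper name is hypothetical; `NETWORK_SUBNET_CONFLICT` is an illustrative label, not a real minikube reason code, while the other two labels and all match strings come from the log itself.

```shell
#!/bin/sh
# Hypothetical triage helper: map raw docker stderr (as captured in the log
# above) onto the failure categories this run exhibits. Illustrative only.
classify_docker_error() {
  case "$1" in
    *"read-only file system"*) echo "PR_DOCKER_READONLY_VOL" ;;  # volume root not writable
    *"overlapping IPv4"*)      echo "NETWORK_SUBNET_CONFLICT" ;; # bridge subnet collision (made-up label)
    *"No such container"*)     echo "GUEST_STATUS" ;;            # node container was never created
    *)                         echo "UNKNOWN" ;;
  esac
}

classify_docker_error "mkdir /var/lib/docker/volumes/x: read-only file system"  # prints PR_DOCKER_READONLY_VOL
```

Under this grouping, every `GUEST_STATUS` and `Nonexistent`-state failure below is downstream of the single `PR_DOCKER_READONLY_VOL` root cause.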

                                                
                                    
TestFunctional/serial/KubeContext (4.17s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:673: (dbg) Run:  kubectl config current-context
functional_test.go:673: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (326.9944ms)

                                                
                                                
** stderr ** 
	W0516 22:05:39.537656    9140 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	error: current-context is not set

                                                
                                                
** /stderr **
functional_test.go:675: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:679: expected current-context = "functional-20220516220221-2444", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/KubeContext]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220516220221-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220516220221-2444: exit status 1 (1.073092s)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: functional-20220516220221-2444

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220516220221-2444 -n functional-20220516220221-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220516220221-2444 -n functional-20220516220221-2444: exit status 7 (2.7536142s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0516 22:05:43.428978    7224 status.go:247] status error: host: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220516220221-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/KubeContext (4.17s)
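The `Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig` warnings above mean every subsequent `kubectl` invocation runs without a kubeconfig, so `current-context` and `--context` lookups necessarily fail. A hypothetical pre-flight check along these lines (the function name is illustrative) would let the harness distinguish "cluster never came up" from a genuine kubectl regression:

```shell
#!/bin/sh
# Hypothetical pre-flight check mirroring the "Config not found" warnings in
# the kubectl failures above: report whether a kubeconfig file exists before
# invoking kubectl against it.
check_kubeconfig() {
  if [ -f "$1" ]; then
    echo "present"
  else
    echo "missing"
  fi
}

check_kubeconfig /nonexistent/kubeconfig   # prints missing
```

When the file is missing, `kubectl config current-context` exiting 1 with `current-context is not set` (as seen above) is the expected behavior, not a separate bug.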

                                                
                                    
TestFunctional/serial/KubectlGetPods (4.18s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:688: (dbg) Run:  kubectl --context functional-20220516220221-2444 get po -A
functional_test.go:688: (dbg) Non-zero exit: kubectl --context functional-20220516220221-2444 get po -A: exit status 1 (278.2615ms)

                                                
                                                
** stderr ** 
	W0516 22:05:43.671200    7600 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220516220221-2444
	* cluster has no server defined

                                                
                                                
** /stderr **
functional_test.go:690: failed to get kubectl pods: args "kubectl --context functional-20220516220221-2444 get po -A" : exit status 1
functional_test.go:694: expected stderr to be empty but got *"W0516 22:05:43.671200    7600 loader.go:223] Config not found: C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig\nError in configuration: \n* context was not found for specified context: functional-20220516220221-2444\n* cluster has no server defined\n"*: args "kubectl --context functional-20220516220221-2444 get po -A"
functional_test.go:697: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-20220516220221-2444 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220516220221-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220516220221-2444: exit status 1 (1.0672851s)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: functional-20220516220221-2444

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220516220221-2444 -n functional-20220516220221-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220516220221-2444 -n functional-20220516220221-2444: exit status 7 (2.8147511s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0516 22:05:47.604892    8328 status.go:247] status error: host: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220516220221-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/KubectlGetPods (4.18s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (3.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1116: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh sudo crictl images
functional_test.go:1116: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh sudo crictl images: exit status 80 (3.0416048s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_f40552ee918ac053c4c404bc1ee7532c196ce64c_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1118: failed to get images by "out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh sudo crictl images" ssh exit status 80
functional_test.go:1122: expected sha for pause:3.3 "0184c1613d929" to be in the output but got *
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_f40552ee918ac053c4c404bc1ee7532c196ce64c_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr ***
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (3.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (11.98s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1139: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1139: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh sudo docker rmi k8s.gcr.io/pause:latest: exit status 80 (3.040074s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_695159ccd5e0da3f5d811f2823eb9163b9dc65a6_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1142: failed to manually delete image "out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh sudo docker rmi k8s.gcr.io/pause:latest" : exit status 80
functional_test.go:1145: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 80 (3.0060865s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_90c12c9ea894b73e3971aa1ec67d0a7aeefe0b8f_2.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1150: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 cache reload
functional_test.go:1150: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 cache reload: (2.9024957s)
functional_test.go:1155: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1155: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 80 (3.0270233s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_90c12c9ea894b73e3971aa1ec67d0a7aeefe0b8f_2.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1157: expected "out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh sudo crictl inspecti k8s.gcr.io/pause:latest" to run successfully but got error: exit status 80
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (11.98s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (5.89s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:708: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 kubectl -- --context functional-20220516220221-2444 get pods
functional_test.go:708: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 kubectl -- --context functional-20220516220221-2444 get pods: exit status 1 (2.0052381s)

                                                
                                                
** stderr ** 
	W0516 22:06:21.900832    7908 loader.go:221] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220516220221-2444
	* no server found for cluster "functional-20220516220221-2444"

                                                
                                                
** /stderr **
functional_test.go:711: failed to get pods. args "out/minikube-windows-amd64.exe -p functional-20220516220221-2444 kubectl -- --context functional-20220516220221-2444 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220516220221-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220516220221-2444: exit status 1 (1.1188303s)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: functional-20220516220221-2444

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220516220221-2444 -n functional-20220516220221-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220516220221-2444 -n functional-20220516220221-2444: exit status 7 (2.7509957s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0516 22:06:25.864520    6816 status.go:247] status error: host: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220516220221-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (5.89s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (5.88s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:733: (dbg) Run:  out\kubectl.exe --context functional-20220516220221-2444 get pods
functional_test.go:733: (dbg) Non-zero exit: out\kubectl.exe --context functional-20220516220221-2444 get pods: exit status 1 (1.952118s)

                                                
                                                
** stderr ** 
	W0516 22:06:27.750266     300 loader.go:221] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220516220221-2444
	* no server found for cluster "functional-20220516220221-2444"

                                                
                                                
** /stderr **
functional_test.go:736: failed to run kubectl directly. args "out\\kubectl.exe --context functional-20220516220221-2444 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220516220221-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220516220221-2444: exit status 1 (1.0970843s)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: functional-20220516220221-2444

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220516220221-2444 -n functional-20220516220221-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220516220221-2444 -n functional-20220516220221-2444: exit status 7 (2.8111285s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0516 22:06:31.742011    5628 status.go:247] status error: host: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220516220221-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (5.88s)

                                                
                                    
TestFunctional/serial/ExtraConfig (116.89s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:749: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220516220221-2444 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:749: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20220516220221-2444 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 60 (1m53.079419s)

                                                
                                                
-- stdout --
	* [functional-20220516220221-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node functional-20220516220221-2444 in cluster functional-20220516220221-2444
	* Pulling base image ...
	* docker "functional-20220516220221-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	* docker "functional-20220516220221-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0516 22:07:20.461927    5852 network_create.go:104] error while trying to create docker network functional-20220516220221-2444 192.168.76.0/24: create docker network functional-20220516220221-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 2b639637f074ced5bf54082ee5531d87dde24e32bb4e4786e00fd679a5ce6f04 (br-2b639637f074): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220516220221-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 2b639637f074ced5bf54082ee5531d87dde24e32bb4e4786e00fd679a5ce6f04 (br-2b639637f074): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220516220221-2444 container: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220516220221-2444: error while creating volume root path '/var/lib/docker/volumes/functional-20220516220221-2444': mkdir /var/lib/docker/volumes/functional-20220516220221-2444: read-only file system
	
	E0516 22:08:11.307164    5852 network_create.go:104] error while trying to create docker network functional-20220516220221-2444 192.168.85.0/24: create docker network functional-20220516220221-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 560a152dd5affb037c695a1ddfa127aa50d1a7210a7b7635805929face070e7a (br-560a152dd5af): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220516220221-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 560a152dd5affb037c695a1ddfa127aa50d1a7210a7b7635805929face070e7a (br-560a152dd5af): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p functional-20220516220221-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220516220221-2444 container: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220516220221-2444: error while creating volume root path '/var/lib/docker/volumes/functional-20220516220221-2444': mkdir /var/lib/docker/volumes/functional-20220516220221-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220516220221-2444 container: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220516220221-2444: error while creating volume root path '/var/lib/docker/volumes/functional-20220516220221-2444': mkdir /var/lib/docker/volumes/functional-20220516220221-2444: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
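Editor's note: both fatal errors in the stderr above trace back to the daemon's `/var/lib/docker` being mounted read-only; the `mkdir ... read-only file system` message is the kernel's standard EROFS error string surfacing through `mkdir(2)`. As a small, hypothetical check unrelated to minikube itself, Python's errno tables show the same string:

```python
import errno
import os

# The "read-only file system" text in the docker daemon error is the
# kernel's EROFS errno (30 on Linux) rendered by strerror.
print(errno.EROFS)
print(os.strerror(errno.EROFS))  # "Read-only file system" on Linux
```

This only identifies the error class; the fix suggested by minikube itself (restarting Docker Desktop so the WSL2 backend remounts its data volume writable) is what the `Suggestion: Restart Docker` line refers to.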
functional_test.go:751: failed to restart minikube. args "out/minikube-windows-amd64.exe start -p functional-20220516220221-2444 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 60
functional_test.go:753: restart took 1m53.0800168s for "functional-20220516220221-2444" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220516220221-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220516220221-2444: exit status 1 (1.0636257s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220516220221-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220516220221-2444 -n functional-20220516220221-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220516220221-2444 -n functional-20220516220221-2444: exit status 7 (2.7339027s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:08:28.629796    6364 status.go:247] status error: host: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220516220221-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/ExtraConfig (116.89s)
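Editor's note: the repeated `networks have overlapping IPv4` errors in this failure come from Docker refusing to create a bridge network whose subnet intersects one already claimed by a leftover `br-*` network. As a rough, hypothetical illustration of that overlap check (not minikube's actual implementation), Python's `ipaddress` module reproduces the decision for the 192.168.85.0/24 subnet the test tried to allocate; the "existing" subnets below are invented for the example:

```python
import ipaddress

# Subnet minikube tried to allocate for the dedicated cluster network.
requested = ipaddress.ip_network("192.168.85.0/24")

# Hypothetical subnets held by leftover br-* bridge networks.
existing = [
    ipaddress.ip_network("192.168.85.0/24"),
    ipaddress.ip_network("192.168.58.0/24"),
]

# Docker rejects creation when any existing subnet overlaps the request.
conflicts = [net for net in existing if requested.overlaps(net)]
print(conflicts)  # only the identical 192.168.85.0/24 entry overlaps
```

Clearing the stale `br-*` networks (e.g. via `docker network prune` after `minikube delete`) is the usual way to free the subnet, though in this run the read-only daemon filesystem was the first-order failure.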

TestFunctional/serial/ComponentHealth (4.23s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:802: (dbg) Run:  kubectl --context functional-20220516220221-2444 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:802: (dbg) Non-zero exit: kubectl --context functional-20220516220221-2444 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (309.8455ms)

** stderr ** 
	W0516 22:08:28.892913    4132 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	error: context "functional-20220516220221-2444" does not exist

** /stderr **
functional_test.go:804: failed to get components. args "kubectl --context functional-20220516220221-2444 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220516220221-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220516220221-2444: exit status 1 (1.116129s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220516220221-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220516220221-2444 -n functional-20220516220221-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220516220221-2444 -n functional-20220516220221-2444: exit status 7 (2.7832968s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:08:32.857762     908 status.go:247] status error: host: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220516220221-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/ComponentHealth (4.23s)

TestFunctional/serial/LogsCmd (3.51s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1228: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 logs
functional_test.go:1228: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 logs: exit status 80 (3.0732815s)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|-------------------------------------|-------------------------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                Args                 |               Profile               |       User        |    Version     |     Start Time      |      End Time       |
	|---------|-------------------------------------|-------------------------------------|-------------------|----------------|---------------------|---------------------|
	| delete  | --all                               | download-only-20220516215532-2444   | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 21:56 GMT | 16 May 22 21:56 GMT |
	| delete  | -p                                  | download-only-20220516215532-2444   | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 21:56 GMT | 16 May 22 21:56 GMT |
	|         | download-only-20220516215532-2444   |                                     |                   |                |                     |                     |
	| delete  | -p                                  | download-only-20220516215532-2444   | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 21:56 GMT | 16 May 22 21:56 GMT |
	|         | download-only-20220516215532-2444   |                                     |                   |                |                     |                     |
	| delete  | -p                                  | download-docker-20220516215629-2444 | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 21:57 GMT | 16 May 22 21:57 GMT |
	|         | download-docker-20220516215629-2444 |                                     |                   |                |                     |                     |
	| delete  | -p                                  | binary-mirror-20220516215715-2444   | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 21:57 GMT | 16 May 22 21:57 GMT |
	|         | binary-mirror-20220516215715-2444   |                                     |                   |                |                     |                     |
	| delete  | -p addons-20220516215732-2444       | addons-20220516215732-2444          | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 21:58 GMT | 16 May 22 21:58 GMT |
	| delete  | -p nospam-20220516215858-2444       | nospam-20220516215858-2444          | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:02 GMT | 16 May 22 22:02 GMT |
	| cache   | functional-20220516220221-2444      | functional-20220516220221-2444      | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:05 GMT | 16 May 22 22:05 GMT |
	|         | cache add k8s.gcr.io/pause:3.1      |                                     |                   |                |                     |                     |
	| cache   | functional-20220516220221-2444      | functional-20220516220221-2444      | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:05 GMT | 16 May 22 22:05 GMT |
	|         | cache add k8s.gcr.io/pause:3.3      |                                     |                   |                |                     |                     |
	| cache   | functional-20220516220221-2444      | functional-20220516220221-2444      | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:05 GMT | 16 May 22 22:05 GMT |
	|         | cache add                           |                                     |                   |                |                     |                     |
	|         | k8s.gcr.io/pause:latest             |                                     |                   |                |                     |                     |
	| cache   | delete k8s.gcr.io/pause:3.3         | minikube                            | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:06 GMT | 16 May 22 22:06 GMT |
	| cache   | list                                | minikube                            | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:06 GMT | 16 May 22 22:06 GMT |
	| cache   | functional-20220516220221-2444      | functional-20220516220221-2444      | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:06 GMT | 16 May 22 22:06 GMT |
	|         | cache reload                        |                                     |                   |                |                     |                     |
	| cache   | delete k8s.gcr.io/pause:3.1         | minikube                            | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:06 GMT | 16 May 22 22:06 GMT |
	| cache   | delete k8s.gcr.io/pause:latest      | minikube                            | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:06 GMT | 16 May 22 22:06 GMT |
	|---------|-------------------------------------|-------------------------------------|-------------------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/16 22:06:32
	Running on machine: minikube2
	Binary: Built with gc go1.18.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0516 22:06:31.999388    5852 out.go:296] Setting OutFile to fd 776 ...
	I0516 22:06:32.057074    5852 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:06:32.057074    5852 out.go:309] Setting ErrFile to fd 972...
	I0516 22:06:32.057074    5852 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:06:32.067765    5852 out.go:303] Setting JSON to false
	I0516 22:06:32.070088    5852 start.go:115] hostinfo: {"hostname":"minikube2","uptime":1904,"bootTime":1652736888,"procs":147,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0516 22:06:32.070088    5852 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0516 22:06:32.074746    5852 out.go:177] * [functional-20220516220221-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0516 22:06:32.079203    5852 notify.go:193] Checking for updates...
	I0516 22:06:32.081874    5852 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0516 22:06:32.084298    5852 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0516 22:06:32.086659    5852 out.go:177]   - MINIKUBE_LOCATION=12739
	I0516 22:06:32.088941    5852 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0516 22:06:32.091576    5852 config.go:178] Loaded profile config "functional-20220516220221-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:06:32.091576    5852 driver.go:358] Setting default libvirt URI to qemu:///system
	I0516 22:06:34.645226    5852 docker.go:137] docker version: linux-20.10.14
	I0516 22:06:34.654010    5852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:06:36.656610    5852 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0025907s)
	I0516 22:06:36.657377    5852 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:45 SystemTime:2022-05-16 22:06:35.6434948 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:06:36.662829    5852 out.go:177] * Using the docker driver based on existing profile
	I0516 22:06:36.666827    5852 start.go:284] selected driver: docker
	I0516 22:06:36.666827    5852 start.go:806] validating driver "docker" against &{Name:functional-20220516220221-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220516220221-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 22:06:36.666827    5852 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0516 22:06:36.685847    5852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:06:38.688529    5852 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.002673s)
	I0516 22:06:38.688529    5852 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:45 SystemTime:2022-05-16 22:06:37.6703272 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:06:38.749106    5852 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0516 22:06:38.749106    5852 cni.go:95] Creating CNI manager for ""
	I0516 22:06:38.749106    5852 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0516 22:06:38.749106    5852 start_flags.go:306] config:
	{Name:functional-20220516220221-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220516220221-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 22:06:38.755744    5852 out.go:177] * Starting control plane node functional-20220516220221-2444 in cluster functional-20220516220221-2444
	I0516 22:06:38.757562    5852 cache.go:120] Beginning downloading kic base image for docker with docker
	I0516 22:06:38.760512    5852 out.go:177] * Pulling base image ...
	I0516 22:06:38.763342    5852 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0516 22:06:38.764354    5852 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0516 22:06:38.764354    5852 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0516 22:06:38.764562    5852 cache.go:57] Caching tarball of preloaded images
	I0516 22:06:38.765091    5852 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0516 22:06:38.765279    5852 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0516 22:06:38.765625    5852 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-20220516220221-2444\config.json ...
	I0516 22:06:39.836592    5852 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0516 22:06:39.836664    5852 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:06:39.836987    5852 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:06:39.837067    5852 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
	I0516 22:06:39.837220    5852 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory, skipping pull
	I0516 22:06:39.837260    5852 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in cache, skipping pull
	I0516 22:06:39.837497    5852 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
	I0516 22:06:39.837637    5852 cache.go:160] Loading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from local cache
	I0516 22:06:39.837669    5852 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:06:42.033467    5852 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from cached tarball
	I0516 22:06:42.033467    5852 cache.go:206] Successfully downloaded all kic artifacts
	I0516 22:06:42.033467    5852 start.go:352] acquiring machines lock for functional-20220516220221-2444: {Name:mkdcc2ea8456bfc6c4e9b4af97ac214783a7ee2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:06:42.033467    5852 start.go:356] acquired machines lock for "functional-20220516220221-2444" in 0s
	I0516 22:06:42.034128    5852 start.go:94] Skipping create...Using existing machine configuration
	I0516 22:06:42.034214    5852 fix.go:55] fixHost starting: 
	I0516 22:06:42.053721    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:06:43.048238    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:06:43.048238    5852 fix.go:103] recreateIfNeeded on functional-20220516220221-2444: state= err=unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:06:43.048238    5852 fix.go:108] machineExists: false. err=machine does not exist
	I0516 22:06:43.053124    5852 out.go:177] * docker "functional-20220516220221-2444" container is missing, will recreate.
	I0516 22:06:43.055065    5852 delete.go:124] DEMOLISHING functional-20220516220221-2444 ...
	I0516 22:06:43.069384    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:06:44.062602    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	W0516 22:06:44.062602    5852 stop.go:75] unable to get state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:06:44.062602    5852 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:06:44.081078    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:06:45.126022    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:06:45.126022    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0446786s)
	I0516 22:06:45.126146    5852 delete.go:82] Unable to get host status for functional-20220516220221-2444, assuming it has already been deleted: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:06:45.134248    5852 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220516220221-2444
	W0516 22:06:46.130244    5852 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220516220221-2444 returned with exit code 1
	I0516 22:06:46.130287    5852 kic.go:356] could not find the container functional-20220516220221-2444 to remove it. will try anyways
	I0516 22:06:46.138918    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:06:47.166233    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:06:47.166261    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0271799s)
	W0516 22:06:47.166398    5852 oci.go:84] error getting container status, will try to delete anyways: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:06:47.174189    5852 cli_runner.go:164] Run: docker exec --privileged -t functional-20220516220221-2444 /bin/bash -c "sudo init 0"
	W0516 22:06:48.199162    5852 cli_runner.go:211] docker exec --privileged -t functional-20220516220221-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 22:06:48.199162    5852 cli_runner.go:217] Completed: docker exec --privileged -t functional-20220516220221-2444 /bin/bash -c "sudo init 0": (1.024747s)
	I0516 22:06:48.199162    5852 oci.go:641] error shutdown functional-20220516220221-2444: docker exec --privileged -t functional-20220516220221-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:06:49.211175    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:06:50.235472    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:06:50.235539    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0242923s)
	I0516 22:06:50.235716    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:06:50.235716    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
	I0516 22:06:50.235775    5852 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:06:50.800393    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:06:51.836012    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:06:51.836012    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.035414s)
	I0516 22:06:51.836012    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:06:51.836012    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
	I0516 22:06:51.836012    5852 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:06:52.944005    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:06:53.945040    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:06:53.945040    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0010304s)
	I0516 22:06:53.945040    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:06:53.945040    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
	I0516 22:06:53.945040    5852 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:06:55.274815    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:06:56.281516    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:06:56.281516    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0066968s)
	I0516 22:06:56.281516    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:06:56.281516    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
	I0516 22:06:56.281516    5852 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:06:57.890354    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:06:58.915594    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:06:58.915594    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0252355s)
	I0516 22:06:58.915594    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:06:58.915594    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
	I0516 22:06:58.915594    5852 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:07:01.270523    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:07:02.293525    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:07:02.293792    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0228678s)
	I0516 22:07:02.293792    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:07:02.293792    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
	I0516 22:07:02.293792    5852 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:07:06.822634    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:07:07.848842    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:07:07.848842    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0260767s)
	I0516 22:07:07.848985    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:07:07.848985    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
	I0516 22:07:07.849055    5852 oci.go:88] couldn't shut down functional-20220516220221-2444 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	 
	I0516 22:07:07.857435    5852 cli_runner.go:164] Run: docker rm -f -v functional-20220516220221-2444
	I0516 22:07:08.883377    5852 cli_runner.go:217] Completed: docker rm -f -v functional-20220516220221-2444: (1.0259367s)
	I0516 22:07:08.891224    5852 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220516220221-2444
	W0516 22:07:09.930306    5852 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220516220221-2444 returned with exit code 1
	I0516 22:07:09.930441    5852 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220516220221-2444: (1.0390776s)
	I0516 22:07:09.939309    5852 cli_runner.go:164] Run: docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:07:11.000604    5852 cli_runner.go:211] docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:07:11.000735    5852 cli_runner.go:217] Completed: docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0611022s)
	I0516 22:07:11.009399    5852 network_create.go:272] running [docker network inspect functional-20220516220221-2444] to gather additional debugging logs...
	I0516 22:07:11.009399    5852 cli_runner.go:164] Run: docker network inspect functional-20220516220221-2444
	W0516 22:07:12.046399    5852 cli_runner.go:211] docker network inspect functional-20220516220221-2444 returned with exit code 1
	I0516 22:07:12.046399    5852 cli_runner.go:217] Completed: docker network inspect functional-20220516220221-2444: (1.0369953s)
	I0516 22:07:12.046399    5852 network_create.go:275] error running [docker network inspect functional-20220516220221-2444]: docker network inspect functional-20220516220221-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20220516220221-2444
	I0516 22:07:12.046399    5852 network_create.go:277] output of [docker network inspect functional-20220516220221-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20220516220221-2444
	
	** /stderr **
	W0516 22:07:12.047537    5852 delete.go:139] delete failed (probably ok) <nil>
	I0516 22:07:12.047716    5852 fix.go:115] Sleeping 1 second for extra luck!
	I0516 22:07:13.052977    5852 start.go:131] createHost starting for "" (driver="docker")
	I0516 22:07:13.057120    5852 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0516 22:07:13.057404    5852 start.go:165] libmachine.API.Create for "functional-20220516220221-2444" (driver="docker")
	I0516 22:07:13.057404    5852 client.go:168] LocalClient.Create starting
	I0516 22:07:13.058286    5852 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 22:07:13.058575    5852 main.go:134] libmachine: Decoding PEM data...
	I0516 22:07:13.058632    5852 main.go:134] libmachine: Parsing certificate...
	I0516 22:07:13.058950    5852 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 22:07:13.058950    5852 main.go:134] libmachine: Decoding PEM data...
	I0516 22:07:13.058950    5852 main.go:134] libmachine: Parsing certificate...
	I0516 22:07:13.067996    5852 cli_runner.go:164] Run: docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:07:14.141149    5852 cli_runner.go:211] docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:07:14.141149    5852 cli_runner.go:217] Completed: docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0728931s)
	I0516 22:07:14.149750    5852 network_create.go:272] running [docker network inspect functional-20220516220221-2444] to gather additional debugging logs...
	I0516 22:07:14.149750    5852 cli_runner.go:164] Run: docker network inspect functional-20220516220221-2444
	W0516 22:07:15.188474    5852 cli_runner.go:211] docker network inspect functional-20220516220221-2444 returned with exit code 1
	I0516 22:07:15.188474    5852 cli_runner.go:217] Completed: docker network inspect functional-20220516220221-2444: (1.0387192s)
	I0516 22:07:15.188474    5852 network_create.go:275] error running [docker network inspect functional-20220516220221-2444]: docker network inspect functional-20220516220221-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20220516220221-2444
	I0516 22:07:15.188474    5852 network_create.go:277] output of [docker network inspect functional-20220516220221-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20220516220221-2444
	
	** /stderr **
	I0516 22:07:15.198253    5852 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 22:07:16.217518    5852 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0190552s)
	I0516 22:07:16.235320    5852 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0007f61d8] misses:0}
	I0516 22:07:16.235320    5852 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:07:16.235320    5852 network_create.go:115] attempt to create docker network functional-20220516220221-2444 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0516 22:07:16.244562    5852 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444
	W0516 22:07:17.247524    5852 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444 returned with exit code 1
	I0516 22:07:17.247524    5852 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: (1.0029574s)
	W0516 22:07:17.247524    5852 network_create.go:107] failed to create docker network functional-20220516220221-2444 192.168.49.0/24, will retry: subnet is taken
	I0516 22:07:17.261827    5852 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007f61d8] amended:false}} dirty:map[] misses:0}
	I0516 22:07:17.261827    5852 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:07:17.278250    5852 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007f61d8] amended:true}} dirty:map[192.168.49.0:0xc0007f61d8 192.168.58.0:0xc0007f67d8] misses:0}
	I0516 22:07:17.278396    5852 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:07:17.278396    5852 network_create.go:115] attempt to create docker network functional-20220516220221-2444 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0516 22:07:17.287025    5852 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444
	W0516 22:07:18.296218    5852 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444 returned with exit code 1
	I0516 22:07:18.296218    5852 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: (1.0091886s)
	W0516 22:07:18.296218    5852 network_create.go:107] failed to create docker network functional-20220516220221-2444 192.168.58.0/24, will retry: subnet is taken
	I0516 22:07:18.311703    5852 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007f61d8] amended:true}} dirty:map[192.168.49.0:0xc0007f61d8 192.168.58.0:0xc0007f67d8] misses:1}
	I0516 22:07:18.311703    5852 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:07:18.325787    5852 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007f61d8] amended:true}} dirty:map[192.168.49.0:0xc0007f61d8 192.168.58.0:0xc0007f67d8 192.168.67.0:0xc0005905f8] misses:1}
	I0516 22:07:18.325787    5852 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:07:18.325787    5852 network_create.go:115] attempt to create docker network functional-20220516220221-2444 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0516 22:07:18.335228    5852 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444
	W0516 22:07:19.369645    5852 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444 returned with exit code 1
	I0516 22:07:19.369645    5852 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: (1.0344123s)
	W0516 22:07:19.369645    5852 network_create.go:107] failed to create docker network functional-20220516220221-2444 192.168.67.0/24, will retry: subnet is taken
	I0516 22:07:19.386275    5852 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007f61d8] amended:true}} dirty:map[192.168.49.0:0xc0007f61d8 192.168.58.0:0xc0007f67d8 192.168.67.0:0xc0005905f8] misses:2}
	I0516 22:07:19.386275    5852 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:07:19.401320    5852 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007f61d8] amended:true}} dirty:map[192.168.49.0:0xc0007f61d8 192.168.58.0:0xc0007f67d8 192.168.67.0:0xc0005905f8 192.168.76.0:0xc0000063e8] misses:2}
	I0516 22:07:19.401320    5852 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:07:19.401320    5852 network_create.go:115] attempt to create docker network functional-20220516220221-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0516 22:07:19.408139    5852 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444
	W0516 22:07:20.461499    5852 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444 returned with exit code 1
	I0516 22:07:20.461499    5852 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: (1.0533544s)
	E0516 22:07:20.461927    5852 network_create.go:104] error while trying to create docker network functional-20220516220221-2444 192.168.76.0/24: create docker network functional-20220516220221-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 2b639637f074ced5bf54082ee5531d87dde24e32bb4e4786e00fd679a5ce6f04 (br-2b639637f074): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	W0516 22:07:20.462222    5852 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220516220221-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 2b639637f074ced5bf54082ee5531d87dde24e32bb4e4786e00fd679a5ce6f04 (br-2b639637f074): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	I0516 22:07:20.477213    5852 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 22:07:21.507936    5852 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0306643s)
	I0516 22:07:21.516883    5852 cli_runner.go:164] Run: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 22:07:22.547986    5852 cli_runner.go:211] docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 22:07:22.548018    5852 cli_runner.go:217] Completed: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0309506s)
	I0516 22:07:22.548173    5852 client.go:171] LocalClient.Create took 9.4906934s
	I0516 22:07:24.574043    5852 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:07:24.582081    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
	W0516 22:07:25.611644    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
	I0516 22:07:25.611644    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0295586s)
	I0516 22:07:25.611644    5852 retry.go:31] will retry after 164.129813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:07:25.795199    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
	W0516 22:07:26.798402    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
	I0516 22:07:26.798439    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0030389s)
	W0516 22:07:26.798465    5852 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	W0516 22:07:26.798465    5852 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:07:26.809383    5852 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:07:26.816394    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
	W0516 22:07:27.858611    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
	I0516 22:07:27.858611    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0422118s)
	I0516 22:07:27.858611    5852 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:07:28.073000    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
	W0516 22:07:29.107576    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
	I0516 22:07:29.107638    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0343639s)
	W0516 22:07:29.107638    5852 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	W0516 22:07:29.107638    5852 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:07:29.107638    5852 start.go:134] duration metric: createHost completed in 16.0545847s
	I0516 22:07:29.120371    5852 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:07:29.129045    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
	W0516 22:07:30.174991    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
	I0516 22:07:30.174991    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0459417s)
	I0516 22:07:30.174991    5852 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:07:30.515550    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
	W0516 22:07:31.532933    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
	I0516 22:07:31.532933    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0173779s)
	W0516 22:07:31.532933    5852 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	W0516 22:07:31.532933    5852 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:07:31.545827    5852 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:07:31.555898    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
	W0516 22:07:32.612377    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
	I0516 22:07:32.612425    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0563284s)
	I0516 22:07:32.612729    5852 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:07:32.853330    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
	W0516 22:07:33.897877    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
	I0516 22:07:33.897877    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0444354s)
	W0516 22:07:33.898144    5852 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	W0516 22:07:33.898196    5852 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:07:33.898196    5852 fix.go:57] fixHost completed within 51.8637908s
	I0516 22:07:33.898196    5852 start.go:81] releasing machines lock for "functional-20220516220221-2444", held for 51.8644883s
	W0516 22:07:33.898399    5852 start.go:608] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220516220221-2444 container: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220516220221-2444: error while creating volume root path '/var/lib/docker/volumes/functional-20220516220221-2444': mkdir /var/lib/docker/volumes/functional-20220516220221-2444: read-only file system
	W0516 22:07:33.898681    5852 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220516220221-2444 container: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220516220221-2444: error while creating volume root path '/var/lib/docker/volumes/functional-20220516220221-2444': mkdir /var/lib/docker/volumes/functional-20220516220221-2444: read-only file system
	
	I0516 22:07:33.898726    5852 start.go:623] Will try again in 5 seconds ...
	I0516 22:07:38.914246    5852 start.go:352] acquiring machines lock for functional-20220516220221-2444: {Name:mkdcc2ea8456bfc6c4e9b4af97ac214783a7ee2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:07:38.914246    5852 start.go:356] acquired machines lock for "functional-20220516220221-2444" in 0s
	I0516 22:07:38.914246    5852 start.go:94] Skipping create...Using existing machine configuration
	I0516 22:07:38.914246    5852 fix.go:55] fixHost starting: 
	I0516 22:07:38.929100    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:07:39.973298    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:07:39.973321    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0439131s)
	I0516 22:07:39.973394    5852 fix.go:103] recreateIfNeeded on functional-20220516220221-2444: state= err=unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:07:39.973394    5852 fix.go:108] machineExists: false. err=machine does not exist
	I0516 22:07:39.977612    5852 out.go:177] * docker "functional-20220516220221-2444" container is missing, will recreate.
	I0516 22:07:39.979706    5852 delete.go:124] DEMOLISHING functional-20220516220221-2444 ...
	I0516 22:07:39.993215    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:07:41.016603    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:07:41.016603    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0233825s)
	W0516 22:07:41.016603    5852 stop.go:75] unable to get state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:07:41.016603    5852 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:07:41.037625    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:07:42.084230    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:07:42.084230    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0465996s)
	I0516 22:07:42.084230    5852 delete.go:82] Unable to get host status for functional-20220516220221-2444, assuming it has already been deleted: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:07:42.091223    5852 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220516220221-2444
	W0516 22:07:43.101826    5852 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220516220221-2444 returned with exit code 1
	I0516 22:07:43.101826    5852 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220516220221-2444: (1.0105985s)
	I0516 22:07:43.101826    5852 kic.go:356] could not find the container functional-20220516220221-2444 to remove it. will try anyways
	I0516 22:07:43.112922    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:07:44.139273    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:07:44.139273    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0263466s)
	W0516 22:07:44.139273    5852 oci.go:84] error getting container status, will try to delete anyways: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:07:44.148507    5852 cli_runner.go:164] Run: docker exec --privileged -t functional-20220516220221-2444 /bin/bash -c "sudo init 0"
	W0516 22:07:45.187649    5852 cli_runner.go:211] docker exec --privileged -t functional-20220516220221-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 22:07:45.187649    5852 cli_runner.go:217] Completed: docker exec --privileged -t functional-20220516220221-2444 /bin/bash -c "sudo init 0": (1.0389849s)
	I0516 22:07:45.187649    5852 oci.go:641] error shutdown functional-20220516220221-2444: docker exec --privileged -t functional-20220516220221-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:07:46.198976    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:07:47.222052    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:07:47.222052    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0229972s)
	I0516 22:07:47.222126    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:07:47.222126    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
	I0516 22:07:47.222170    5852 retry.go:31] will retry after 484.444922ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:07:47.719123    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:07:48.740971    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:07:48.741111    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.021687s)
	I0516 22:07:48.741111    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:07:48.741111    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
	I0516 22:07:48.741111    5852 retry.go:31] will retry after 587.275613ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:07:49.351577    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:07:50.386444    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:07:50.386590    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.034687s)
	I0516 22:07:50.386590    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:07:50.386590    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
	I0516 22:07:50.386590    5852 retry.go:31] will retry after 892.239589ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:07:51.299154    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:07:52.322362    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:07:52.322362    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0231563s)
	I0516 22:07:52.322646    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:07:52.322646    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
	I0516 22:07:52.322646    5852 retry.go:31] will retry after 1.989705391s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:07:54.333536    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:07:55.377643    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:07:55.377643    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0441022s)
	I0516 22:07:55.377643    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:07:55.377643    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
	I0516 22:07:55.377643    5852 retry.go:31] will retry after 1.818837414s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:07:57.219610    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:07:58.254218    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:07:58.254252    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0344902s)
	I0516 22:07:58.254420    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:07:58.254466    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
	I0516 22:07:58.254496    5852 retry.go:31] will retry after 2.669912672s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:08:00.938347    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:08:01.979848    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:08:01.979883    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0413879s)
	I0516 22:08:01.979954    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:08:01.979954    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
	I0516 22:08:01.980026    5852 oci.go:88] couldn't shut down functional-20220516220221-2444 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	 
	I0516 22:08:01.988556    5852 cli_runner.go:164] Run: docker rm -f -v functional-20220516220221-2444
	I0516 22:08:02.994879    5852 cli_runner.go:217] Completed: docker rm -f -v functional-20220516220221-2444: (1.0061362s)
	I0516 22:08:03.003708    5852 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220516220221-2444
	W0516 22:08:04.043419    5852 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220516220221-2444 returned with exit code 1
	I0516 22:08:04.043419    5852 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220516220221-2444: (1.039552s)
	I0516 22:08:04.051561    5852 cli_runner.go:164] Run: docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:08:05.081774    5852 cli_runner.go:211] docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:08:05.081774    5852 cli_runner.go:217] Completed: docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0302085s)
	I0516 22:08:05.090445    5852 network_create.go:272] running [docker network inspect functional-20220516220221-2444] to gather additional debugging logs...
	I0516 22:08:05.090445    5852 cli_runner.go:164] Run: docker network inspect functional-20220516220221-2444
	W0516 22:08:06.111971    5852 cli_runner.go:211] docker network inspect functional-20220516220221-2444 returned with exit code 1
	I0516 22:08:06.111971    5852 cli_runner.go:217] Completed: docker network inspect functional-20220516220221-2444: (1.0215211s)
	I0516 22:08:06.111971    5852 network_create.go:275] error running [docker network inspect functional-20220516220221-2444]: docker network inspect functional-20220516220221-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20220516220221-2444
	I0516 22:08:06.111971    5852 network_create.go:277] output of [docker network inspect functional-20220516220221-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20220516220221-2444
	
	** /stderr **
	W0516 22:08:06.113224    5852 delete.go:139] delete failed (probably ok) <nil>
	I0516 22:08:06.113224    5852 fix.go:115] Sleeping 1 second for extra luck!
	I0516 22:08:07.116774    5852 start.go:131] createHost starting for "" (driver="docker")
	I0516 22:08:07.120577    5852 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0516 22:08:07.120862    5852 start.go:165] libmachine.API.Create for "functional-20220516220221-2444" (driver="docker")
	I0516 22:08:07.120862    5852 client.go:168] LocalClient.Create starting
	I0516 22:08:07.121701    5852 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 22:08:07.121995    5852 main.go:134] libmachine: Decoding PEM data...
	I0516 22:08:07.122036    5852 main.go:134] libmachine: Parsing certificate...
	I0516 22:08:07.122096    5852 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 22:08:07.122096    5852 main.go:134] libmachine: Decoding PEM data...
	I0516 22:08:07.122096    5852 main.go:134] libmachine: Parsing certificate...
	I0516 22:08:07.131141    5852 cli_runner.go:164] Run: docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:08:08.145352    5852 cli_runner.go:211] docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:08:08.145573    5852 cli_runner.go:217] Completed: docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0140756s)
	I0516 22:08:08.153956    5852 network_create.go:272] running [docker network inspect functional-20220516220221-2444] to gather additional debugging logs...
	I0516 22:08:08.153956    5852 cli_runner.go:164] Run: docker network inspect functional-20220516220221-2444
	W0516 22:08:09.174362    5852 cli_runner.go:211] docker network inspect functional-20220516220221-2444 returned with exit code 1
	I0516 22:08:09.174362    5852 cli_runner.go:217] Completed: docker network inspect functional-20220516220221-2444: (1.0204007s)
	I0516 22:08:09.174362    5852 network_create.go:275] error running [docker network inspect functional-20220516220221-2444]: docker network inspect functional-20220516220221-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20220516220221-2444
	I0516 22:08:09.174362    5852 network_create.go:277] output of [docker network inspect functional-20220516220221-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20220516220221-2444
	
	** /stderr **
	I0516 22:08:09.182932    5852 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 22:08:10.195219    5852 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0122821s)
	I0516 22:08:10.212092    5852 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007f61d8] amended:true}} dirty:map[192.168.49.0:0xc0007f61d8 192.168.58.0:0xc0007f67d8 192.168.67.0:0xc0005905f8 192.168.76.0:0xc0000063e8] misses:2}
	I0516 22:08:10.212092    5852 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:08:10.228420    5852 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007f61d8] amended:true}} dirty:map[192.168.49.0:0xc0007f61d8 192.168.58.0:0xc0007f67d8 192.168.67.0:0xc0005905f8 192.168.76.0:0xc0000063e8] misses:3}
	I0516 22:08:10.228420    5852 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:08:10.244785    5852 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007f61d8 192.168.58.0:0xc0007f67d8 192.168.67.0:0xc0005905f8 192.168.76.0:0xc0000063e8] amended:false}} dirty:map[] misses:0}
	I0516 22:08:10.244785    5852 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:08:10.258795    5852 network.go:279] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007f61d8 192.168.58.0:0xc0007f67d8 192.168.67.0:0xc0005905f8 192.168.76.0:0xc0000063e8] amended:false}} dirty:map[] misses:0}
	I0516 22:08:10.258795    5852 network.go:238] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:08:10.273742    5852 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007f61d8 192.168.58.0:0xc0007f67d8 192.168.67.0:0xc0005905f8 192.168.76.0:0xc0000063e8] amended:true}} dirty:map[192.168.49.0:0xc0007f61d8 192.168.58.0:0xc0007f67d8 192.168.67.0:0xc0005905f8 192.168.76.0:0xc0000063e8 192.168.85.0:0xc000802530] misses:0}
	I0516 22:08:10.273742    5852 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:08:10.273742    5852 network_create.go:115] attempt to create docker network functional-20220516220221-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0516 22:08:10.282768    5852 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444
	W0516 22:08:11.307037    5852 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444 returned with exit code 1
	I0516 22:08:11.307089    5852 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: (1.0241123s)
	E0516 22:08:11.307164    5852 network_create.go:104] error while trying to create docker network functional-20220516220221-2444 192.168.85.0/24: create docker network functional-20220516220221-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 560a152dd5affb037c695a1ddfa127aa50d1a7210a7b7635805929face070e7a (br-560a152dd5af): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	W0516 22:08:11.307428    5852 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220516220221-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 560a152dd5affb037c695a1ddfa127aa50d1a7210a7b7635805929face070e7a (br-560a152dd5af): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	I0516 22:08:11.323114    5852 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 22:08:12.366822    5852 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0437032s)
	I0516 22:08:12.375636    5852 cli_runner.go:164] Run: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 22:08:13.393783    5852 cli_runner.go:211] docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 22:08:13.393783    5852 cli_runner.go:217] Completed: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0181419s)
	I0516 22:08:13.393783    5852 client.go:171] LocalClient.Create took 6.272891s
	I0516 22:08:15.414551    5852 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:08:15.421561    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
	W0516 22:08:16.451318    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
	I0516 22:08:16.451318    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0296225s)
	I0516 22:08:16.451499    5852 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:08:16.732734    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
	W0516 22:08:17.740302    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
	I0516 22:08:17.740302    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0075633s)
	W0516 22:08:17.740302    5852 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	W0516 22:08:17.740302    5852 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:08:17.751760    5852 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:08:17.758715    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
	W0516 22:08:18.775961    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
	I0516 22:08:18.775961    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.017241s)
	I0516 22:08:18.775961    5852 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:08:18.992746    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
	W0516 22:08:20.013517    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
	I0516 22:08:20.013517    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0207663s)
	W0516 22:08:20.013517    5852 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	W0516 22:08:20.013517    5852 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:08:20.013517    5852 start.go:134] duration metric: createHost completed in 12.8966816s
	I0516 22:08:20.024676    5852 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:08:20.031751    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
	W0516 22:08:21.055352    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
	I0516 22:08:21.055352    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0235955s)
	I0516 22:08:21.055681    5852 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:08:21.379022    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
	W0516 22:08:22.409423    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
	I0516 22:08:22.409565    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0302162s)
	W0516 22:08:22.409565    5852 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	W0516 22:08:22.409565    5852 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:08:22.420439    5852 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:08:22.426468    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
	W0516 22:08:23.444624    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
	I0516 22:08:23.444624    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0179634s)
	I0516 22:08:23.444624    5852 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:08:23.798533    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
	W0516 22:08:24.806240    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
	I0516 22:08:24.806350    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0075259s)
	W0516 22:08:24.806350    5852 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	W0516 22:08:24.806350    5852 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	I0516 22:08:24.806350    5852 fix.go:57] fixHost completed within 45.8918885s
	I0516 22:08:24.806350    5852 start.go:81] releasing machines lock for "functional-20220516220221-2444", held for 45.8918885s
	W0516 22:08:24.807088    5852 out.go:239] * Failed to start docker container. Running "minikube delete -p functional-20220516220221-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220516220221-2444 container: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220516220221-2444: error while creating volume root path '/var/lib/docker/volumes/functional-20220516220221-2444': mkdir /var/lib/docker/volumes/functional-20220516220221-2444: read-only file system
	
	I0516 22:08:24.812861    5852 out.go:177] 
	W0516 22:08:24.815298    5852 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220516220221-2444 container: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220516220221-2444: error while creating volume root path '/var/lib/docker/volumes/functional-20220516220221-2444': mkdir /var/lib/docker/volumes/functional-20220516220221-2444: read-only file system
	
	W0516 22:08:24.815298    5852 out.go:239] * Suggestion: Restart Docker
	W0516 22:08:24.815298    5852 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0516 22:08:24.819508    5852 out.go:177] 
	
	* 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_logs_80bd2298da0c083373823443180fffe8ad701919_703.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1230: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 logs failed: exit status 80
functional_test.go:1220: expected minikube logs to include word: -"Linux"- but got 
**** 
* ==> Audit <==
* |---------|-------------------------------------|-------------------------------------|-------------------|----------------|---------------------|---------------------|
| Command |                Args                 |               Profile               |       User        |    Version     |     Start Time      |      End Time       |
|---------|-------------------------------------|-------------------------------------|-------------------|----------------|---------------------|---------------------|
| delete  | --all                               | download-only-20220516215532-2444   | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 21:56 GMT | 16 May 22 21:56 GMT |
| delete  | -p                                  | download-only-20220516215532-2444   | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 21:56 GMT | 16 May 22 21:56 GMT |
|         | download-only-20220516215532-2444   |                                     |                   |                |                     |                     |
| delete  | -p                                  | download-only-20220516215532-2444   | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 21:56 GMT | 16 May 22 21:56 GMT |
|         | download-only-20220516215532-2444   |                                     |                   |                |                     |                     |
| delete  | -p                                  | download-docker-20220516215629-2444 | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 21:57 GMT | 16 May 22 21:57 GMT |
|         | download-docker-20220516215629-2444 |                                     |                   |                |                     |                     |
| delete  | -p                                  | binary-mirror-20220516215715-2444   | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 21:57 GMT | 16 May 22 21:57 GMT |
|         | binary-mirror-20220516215715-2444   |                                     |                   |                |                     |                     |
| delete  | -p addons-20220516215732-2444       | addons-20220516215732-2444          | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 21:58 GMT | 16 May 22 21:58 GMT |
| delete  | -p nospam-20220516215858-2444       | nospam-20220516215858-2444          | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:02 GMT | 16 May 22 22:02 GMT |
| cache   | functional-20220516220221-2444      | functional-20220516220221-2444      | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:05 GMT | 16 May 22 22:05 GMT |
|         | cache add k8s.gcr.io/pause:3.1      |                                     |                   |                |                     |                     |
| cache   | functional-20220516220221-2444      | functional-20220516220221-2444      | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:05 GMT | 16 May 22 22:05 GMT |
|         | cache add k8s.gcr.io/pause:3.3      |                                     |                   |                |                     |                     |
| cache   | functional-20220516220221-2444      | functional-20220516220221-2444      | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:05 GMT | 16 May 22 22:05 GMT |
|         | cache add                           |                                     |                   |                |                     |                     |
|         | k8s.gcr.io/pause:latest             |                                     |                   |                |                     |                     |
| cache   | delete k8s.gcr.io/pause:3.3         | minikube                            | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:06 GMT | 16 May 22 22:06 GMT |
| cache   | list                                | minikube                            | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:06 GMT | 16 May 22 22:06 GMT |
| cache   | functional-20220516220221-2444      | functional-20220516220221-2444      | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:06 GMT | 16 May 22 22:06 GMT |
|         | cache reload                        |                                     |                   |                |                     |                     |
| cache   | delete k8s.gcr.io/pause:3.1         | minikube                            | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:06 GMT | 16 May 22 22:06 GMT |
| cache   | delete k8s.gcr.io/pause:latest      | minikube                            | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:06 GMT | 16 May 22 22:06 GMT |
|---------|-------------------------------------|-------------------------------------|-------------------|----------------|---------------------|---------------------|

* 
* ==> Last Start <==
* Log file created at: 2022/05/16 22:06:32
Running on machine: minikube2
Binary: Built with gc go1.18.2 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0516 22:06:31.999388    5852 out.go:296] Setting OutFile to fd 776 ...
I0516 22:06:32.057074    5852 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0516 22:06:32.057074    5852 out.go:309] Setting ErrFile to fd 972...
I0516 22:06:32.057074    5852 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0516 22:06:32.067765    5852 out.go:303] Setting JSON to false
I0516 22:06:32.070088    5852 start.go:115] hostinfo: {"hostname":"minikube2","uptime":1904,"bootTime":1652736888,"procs":147,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
W0516 22:06:32.070088    5852 start.go:123] gopshost.Virtualization returned error: not implemented yet
I0516 22:06:32.074746    5852 out.go:177] * [functional-20220516220221-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
I0516 22:06:32.079203    5852 notify.go:193] Checking for updates...
I0516 22:06:32.081874    5852 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
I0516 22:06:32.084298    5852 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
I0516 22:06:32.086659    5852 out.go:177]   - MINIKUBE_LOCATION=12739
I0516 22:06:32.088941    5852 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0516 22:06:32.091576    5852 config.go:178] Loaded profile config "functional-20220516220221-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
I0516 22:06:32.091576    5852 driver.go:358] Setting default libvirt URI to qemu:///system
I0516 22:06:34.645226    5852 docker.go:137] docker version: linux-20.10.14
I0516 22:06:34.654010    5852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0516 22:06:36.656610    5852 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0025907s)
I0516 22:06:36.657377    5852 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:45 SystemTime:2022-05-16 22:06:35.6434948 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0516 22:06:36.662829    5852 out.go:177] * Using the docker driver based on existing profile
I0516 22:06:36.666827    5852 start.go:284] selected driver: docker
I0516 22:06:36.666827    5852 start.go:806] validating driver "docker" against &{Name:functional-20220516220221-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220516220221-2444 Namespace:default APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptim
izations:false DisableMetrics:false}
I0516 22:06:36.666827    5852 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0516 22:06:36.685847    5852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0516 22:06:38.688529    5852 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.002673s)
I0516 22:06:38.688529    5852 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:45 SystemTime:2022-05-16 22:06:37.6703272 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,p
rofile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
I0516 22:06:38.749106    5852 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0516 22:06:38.749106    5852 cni.go:95] Creating CNI manager for ""
I0516 22:06:38.749106    5852 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0516 22:06:38.749106    5852 start_flags.go:306] config:
{Name:functional-20220516220221-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220516220221-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false}
I0516 22:06:38.755744    5852 out.go:177] * Starting control plane node functional-20220516220221-2444 in cluster functional-20220516220221-2444
I0516 22:06:38.757562    5852 cache.go:120] Beginning downloading kic base image for docker with docker
I0516 22:06:38.760512    5852 out.go:177] * Pulling base image ...
I0516 22:06:38.763342    5852 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
I0516 22:06:38.764354    5852 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
I0516 22:06:38.764354    5852 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
I0516 22:06:38.764562    5852 cache.go:57] Caching tarball of preloaded images
I0516 22:06:38.765091    5852 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0516 22:06:38.765279    5852 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
I0516 22:06:38.765625    5852 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-20220516220221-2444\config.json ...
I0516 22:06:39.836592    5852 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
I0516 22:06:39.836664    5852 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
I0516 22:06:39.836987    5852 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
I0516 22:06:39.837067    5852 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
I0516 22:06:39.837220    5852 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory, skipping pull
I0516 22:06:39.837260    5852 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in cache, skipping pull
I0516 22:06:39.837497    5852 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
I0516 22:06:39.837637    5852 cache.go:160] Loading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from local cache
I0516 22:06:39.837669    5852 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
I0516 22:06:42.033467    5852 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from cached tarball
I0516 22:06:42.033467    5852 cache.go:206] Successfully downloaded all kic artifacts
I0516 22:06:42.033467    5852 start.go:352] acquiring machines lock for functional-20220516220221-2444: {Name:mkdcc2ea8456bfc6c4e9b4af97ac214783a7ee2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0516 22:06:42.033467    5852 start.go:356] acquired machines lock for "functional-20220516220221-2444" in 0s
I0516 22:06:42.034128    5852 start.go:94] Skipping create...Using existing machine configuration
I0516 22:06:42.034214    5852 fix.go:55] fixHost starting: 
I0516 22:06:42.053721    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:06:43.048238    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:06:43.048238    5852 fix.go:103] recreateIfNeeded on functional-20220516220221-2444: state= err=unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
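The recreate decision above keys off a non-zero exit from `docker container inspect` together with a "No such container" message on stderr. A hedged sketch of that classification (helper name and signature are illustrative, not minikube's fix.go API):

```go
package main

import (
	"fmt"
	"strings"
)

// needsRecreate mirrors what the log shows: inspect exited non-zero and
// stderr names a missing container, so machineExists is false and the
// container will be recreated.
func needsRecreate(exitCode int, stderr string) bool {
	return exitCode != 0 && strings.Contains(stderr, "No such container")
}

func main() {
	stderr := "Error: No such container: functional-20220516220221-2444"
	if needsRecreate(1, stderr) {
		fmt.Println("container is missing, will recreate.")
	}
}
```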
I0516 22:06:43.048238    5852 fix.go:108] machineExists: false. err=machine does not exist
I0516 22:06:43.053124    5852 out.go:177] * docker "functional-20220516220221-2444" container is missing, will recreate.
I0516 22:06:43.055065    5852 delete.go:124] DEMOLISHING functional-20220516220221-2444 ...
I0516 22:06:43.069384    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:06:44.062602    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
W0516 22:06:44.062602    5852 stop.go:75] unable to get state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:06:44.062602    5852 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:06:44.081078    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:06:45.126022    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:06:45.126022    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0446786s)
I0516 22:06:45.126146    5852 delete.go:82] Unable to get host status for functional-20220516220221-2444, assuming it has already been deleted: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:06:45.134248    5852 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220516220221-2444
W0516 22:06:46.130244    5852 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220516220221-2444 returned with exit code 1
I0516 22:06:46.130287    5852 kic.go:356] could not find the container functional-20220516220221-2444 to remove it. will try anyways
I0516 22:06:46.138918    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:06:47.166233    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:06:47.166261    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0271799s)
W0516 22:06:47.166398    5852 oci.go:84] error getting container status, will try to delete anyways: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:06:47.174189    5852 cli_runner.go:164] Run: docker exec --privileged -t functional-20220516220221-2444 /bin/bash -c "sudo init 0"
W0516 22:06:48.199162    5852 cli_runner.go:211] docker exec --privileged -t functional-20220516220221-2444 /bin/bash -c "sudo init 0" returned with exit code 1
I0516 22:06:48.199162    5852 cli_runner.go:217] Completed: docker exec --privileged -t functional-20220516220221-2444 /bin/bash -c "sudo init 0": (1.024747s)
I0516 22:06:48.199162    5852 oci.go:641] error shutdown functional-20220516220221-2444: docker exec --privileged -t functional-20220516220221-2444 /bin/bash -c "sudo init 0": exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:06:49.211175    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:06:50.235472    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:06:50.235539    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0242923s)
I0516 22:06:50.235716    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:06:50.235716    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
I0516 22:06:50.235775    5852 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:06:50.800393    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:06:51.836012    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:06:51.836012    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.035414s)
I0516 22:06:51.836012    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:06:51.836012    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
I0516 22:06:51.836012    5852 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:06:52.944005    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:06:53.945040    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:06:53.945040    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0010304s)
I0516 22:06:53.945040    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:06:53.945040    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
I0516 22:06:53.945040    5852 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:06:55.274815    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:06:56.281516    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:06:56.281516    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0066968s)
I0516 22:06:56.281516    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:06:56.281516    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
I0516 22:06:56.281516    5852 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:06:57.890354    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:06:58.915594    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:06:58.915594    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0252355s)
I0516 22:06:58.915594    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:06:58.915594    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
I0516 22:06:58.915594    5852 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:01.270523    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:07:02.293525    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:07:02.293792    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0228678s)
I0516 22:07:02.293792    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:02.293792    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
I0516 22:07:02.293792    5852 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:06.822634    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:07:07.848842    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:07:07.848842    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0260767s)
I0516 22:07:07.848985    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:07.848985    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
I0516 22:07:07.849055    5852 oci.go:88] couldn't shut down functional-20220516220221-2444 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:07.857435    5852 cli_runner.go:164] Run: docker rm -f -v functional-20220516220221-2444
I0516 22:07:08.883377    5852 cli_runner.go:217] Completed: docker rm -f -v functional-20220516220221-2444: (1.0259367s)
I0516 22:07:08.891224    5852 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220516220221-2444
W0516 22:07:09.930306    5852 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220516220221-2444 returned with exit code 1
I0516 22:07:09.930441    5852 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220516220221-2444: (1.0390776s)
I0516 22:07:09.939309    5852 cli_runner.go:164] Run: docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0516 22:07:11.000604    5852 cli_runner.go:211] docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0516 22:07:11.000735    5852 cli_runner.go:217] Completed: docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0611022s)
I0516 22:07:11.009399    5852 network_create.go:272] running [docker network inspect functional-20220516220221-2444] to gather additional debugging logs...
I0516 22:07:11.009399    5852 cli_runner.go:164] Run: docker network inspect functional-20220516220221-2444
W0516 22:07:12.046399    5852 cli_runner.go:211] docker network inspect functional-20220516220221-2444 returned with exit code 1
I0516 22:07:12.046399    5852 cli_runner.go:217] Completed: docker network inspect functional-20220516220221-2444: (1.0369953s)
I0516 22:07:12.046399    5852 network_create.go:275] error running [docker network inspect functional-20220516220221-2444]: docker network inspect functional-20220516220221-2444: exit status 1
stdout:
[]

stderr:
Error: No such network: functional-20220516220221-2444
I0516 22:07:12.046399    5852 network_create.go:277] output of [docker network inspect functional-20220516220221-2444]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: functional-20220516220221-2444

** /stderr **
W0516 22:07:12.047537    5852 delete.go:139] delete failed (probably ok) <nil>
I0516 22:07:12.047716    5852 fix.go:115] Sleeping 1 second for extra luck!
I0516 22:07:13.052977    5852 start.go:131] createHost starting for "" (driver="docker")
I0516 22:07:13.057120    5852 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0516 22:07:13.057404    5852 start.go:165] libmachine.API.Create for "functional-20220516220221-2444" (driver="docker")
I0516 22:07:13.057404    5852 client.go:168] LocalClient.Create starting
I0516 22:07:13.058286    5852 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
I0516 22:07:13.058575    5852 main.go:134] libmachine: Decoding PEM data...
I0516 22:07:13.058632    5852 main.go:134] libmachine: Parsing certificate...
I0516 22:07:13.058950    5852 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
I0516 22:07:13.058950    5852 main.go:134] libmachine: Decoding PEM data...
I0516 22:07:13.058950    5852 main.go:134] libmachine: Parsing certificate...
I0516 22:07:13.067996    5852 cli_runner.go:164] Run: docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0516 22:07:14.141149    5852 cli_runner.go:211] docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0516 22:07:14.141149    5852 cli_runner.go:217] Completed: docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0728931s)
I0516 22:07:14.149750    5852 network_create.go:272] running [docker network inspect functional-20220516220221-2444] to gather additional debugging logs...
I0516 22:07:14.149750    5852 cli_runner.go:164] Run: docker network inspect functional-20220516220221-2444
W0516 22:07:15.188474    5852 cli_runner.go:211] docker network inspect functional-20220516220221-2444 returned with exit code 1
I0516 22:07:15.188474    5852 cli_runner.go:217] Completed: docker network inspect functional-20220516220221-2444: (1.0387192s)
I0516 22:07:15.188474    5852 network_create.go:275] error running [docker network inspect functional-20220516220221-2444]: docker network inspect functional-20220516220221-2444: exit status 1
stdout:
[]

stderr:
Error: No such network: functional-20220516220221-2444
I0516 22:07:15.188474    5852 network_create.go:277] output of [docker network inspect functional-20220516220221-2444]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: functional-20220516220221-2444

** /stderr **
I0516 22:07:15.198253    5852 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0516 22:07:16.217518    5852 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0190552s)
I0516 22:07:16.235320    5852 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0007f61d8] misses:0}
I0516 22:07:16.235320    5852 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0516 22:07:16.235320    5852 network_create.go:115] attempt to create docker network functional-20220516220221-2444 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0516 22:07:16.244562    5852 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444
W0516 22:07:17.247524    5852 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444 returned with exit code 1
I0516 22:07:17.247524    5852 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: (1.0029574s)
W0516 22:07:17.247524    5852 network_create.go:107] failed to create docker network functional-20220516220221-2444 192.168.49.0/24, will retry: subnet is taken
I0516 22:07:17.261827    5852 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007f61d8] amended:false}} dirty:map[] misses:0}
I0516 22:07:17.261827    5852 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0516 22:07:17.278250    5852 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007f61d8] amended:true}} dirty:map[192.168.49.0:0xc0007f61d8 192.168.58.0:0xc0007f67d8] misses:0}
I0516 22:07:17.278396    5852 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0516 22:07:17.278396    5852 network_create.go:115] attempt to create docker network functional-20220516220221-2444 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0516 22:07:17.287025    5852 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444
W0516 22:07:18.296218    5852 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444 returned with exit code 1
I0516 22:07:18.296218    5852 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: (1.0091886s)
W0516 22:07:18.296218    5852 network_create.go:107] failed to create docker network functional-20220516220221-2444 192.168.58.0/24, will retry: subnet is taken
I0516 22:07:18.311703    5852 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007f61d8] amended:true}} dirty:map[192.168.49.0:0xc0007f61d8 192.168.58.0:0xc0007f67d8] misses:1}
I0516 22:07:18.311703    5852 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0516 22:07:18.325787    5852 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007f61d8] amended:true}} dirty:map[192.168.49.0:0xc0007f61d8 192.168.58.0:0xc0007f67d8 192.168.67.0:0xc0005905f8] misses:1}
I0516 22:07:18.325787    5852 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0516 22:07:18.325787    5852 network_create.go:115] attempt to create docker network functional-20220516220221-2444 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
I0516 22:07:18.335228    5852 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444
W0516 22:07:19.369645    5852 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444 returned with exit code 1
I0516 22:07:19.369645    5852 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: (1.0344123s)
W0516 22:07:19.369645    5852 network_create.go:107] failed to create docker network functional-20220516220221-2444 192.168.67.0/24, will retry: subnet is taken
I0516 22:07:19.386275    5852 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007f61d8] amended:true}} dirty:map[192.168.49.0:0xc0007f61d8 192.168.58.0:0xc0007f67d8 192.168.67.0:0xc0005905f8] misses:2}
I0516 22:07:19.386275    5852 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0516 22:07:19.401320    5852 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007f61d8] amended:true}} dirty:map[192.168.49.0:0xc0007f61d8 192.168.58.0:0xc0007f67d8 192.168.67.0:0xc0005905f8 192.168.76.0:0xc0000063e8] misses:2}
I0516 22:07:19.401320    5852 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0516 22:07:19.401320    5852 network_create.go:115] attempt to create docker network functional-20220516220221-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
I0516 22:07:19.408139    5852 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444
W0516 22:07:20.461499    5852 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444 returned with exit code 1
I0516 22:07:20.461499    5852 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: (1.0533544s)
E0516 22:07:20.461927    5852 network_create.go:104] error while trying to create docker network functional-20220516220221-2444 192.168.76.0/24: create docker network functional-20220516220221-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 2b639637f074ced5bf54082ee5531d87dde24e32bb4e4786e00fd679a5ce6f04 (br-2b639637f074): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
W0516 22:07:20.462222    5852 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220516220221-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 2b639637f074ced5bf54082ee5531d87dde24e32bb4e4786e00fd679a5ce6f04 (br-2b639637f074): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
I0516 22:07:20.477213    5852 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0516 22:07:21.507936    5852 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0306643s)
I0516 22:07:21.516883    5852 cli_runner.go:164] Run: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true
W0516 22:07:22.547986    5852 cli_runner.go:211] docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
I0516 22:07:22.548018    5852 cli_runner.go:217] Completed: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0309506s)
I0516 22:07:22.548173    5852 client.go:171] LocalClient.Create took 9.4906934s
I0516 22:07:24.574043    5852 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0516 22:07:24.582081    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
W0516 22:07:25.611644    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
I0516 22:07:25.611644    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0295586s)
I0516 22:07:25.611644    5852 retry.go:31] will retry after 164.129813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:25.795199    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
W0516 22:07:26.798402    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
I0516 22:07:26.798439    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0030389s)
W0516 22:07:26.798465    5852 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
W0516 22:07:26.798465    5852 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:26.809383    5852 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0516 22:07:26.816394    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
W0516 22:07:27.858611    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
I0516 22:07:27.858611    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0422118s)
I0516 22:07:27.858611    5852 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:28.073000    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
W0516 22:07:29.107576    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
I0516 22:07:29.107638    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0343639s)
W0516 22:07:29.107638    5852 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
W0516 22:07:29.107638    5852 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:29.107638    5852 start.go:134] duration metric: createHost completed in 16.0545847s
I0516 22:07:29.120371    5852 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0516 22:07:29.129045    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
W0516 22:07:30.174991    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
I0516 22:07:30.174991    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0459417s)
I0516 22:07:30.174991    5852 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:30.515550    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
W0516 22:07:31.532933    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
I0516 22:07:31.532933    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0173779s)
W0516 22:07:31.532933    5852 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
W0516 22:07:31.532933    5852 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:31.545827    5852 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0516 22:07:31.555898    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
W0516 22:07:32.612377    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
I0516 22:07:32.612425    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0563284s)
I0516 22:07:32.612729    5852 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:32.853330    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
W0516 22:07:33.897877    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
I0516 22:07:33.897877    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0444354s)
W0516 22:07:33.898144    5852 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
W0516 22:07:33.898196    5852 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:33.898196    5852 fix.go:57] fixHost completed within 51.8637908s
I0516 22:07:33.898196    5852 start.go:81] releasing machines lock for "functional-20220516220221-2444", held for 51.8644883s
W0516 22:07:33.898399    5852 start.go:608] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220516220221-2444 container: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220516220221-2444: error while creating volume root path '/var/lib/docker/volumes/functional-20220516220221-2444': mkdir /var/lib/docker/volumes/functional-20220516220221-2444: read-only file system
W0516 22:07:33.898681    5852 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220516220221-2444 container: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220516220221-2444: error while creating volume root path '/var/lib/docker/volumes/functional-20220516220221-2444': mkdir /var/lib/docker/volumes/functional-20220516220221-2444: read-only file system

I0516 22:07:33.898726    5852 start.go:623] Will try again in 5 seconds ...
I0516 22:07:38.914246    5852 start.go:352] acquiring machines lock for functional-20220516220221-2444: {Name:mkdcc2ea8456bfc6c4e9b4af97ac214783a7ee2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0516 22:07:38.914246    5852 start.go:356] acquired machines lock for "functional-20220516220221-2444" in 0s
I0516 22:07:38.914246    5852 start.go:94] Skipping create...Using existing machine configuration
I0516 22:07:38.914246    5852 fix.go:55] fixHost starting: 
I0516 22:07:38.929100    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:07:39.973298    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:07:39.973321    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0439131s)
I0516 22:07:39.973394    5852 fix.go:103] recreateIfNeeded on functional-20220516220221-2444: state= err=unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:39.973394    5852 fix.go:108] machineExists: false. err=machine does not exist
I0516 22:07:39.977612    5852 out.go:177] * docker "functional-20220516220221-2444" container is missing, will recreate.
I0516 22:07:39.979706    5852 delete.go:124] DEMOLISHING functional-20220516220221-2444 ...
I0516 22:07:39.993215    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:07:41.016603    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:07:41.016603    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0233825s)
W0516 22:07:41.016603    5852 stop.go:75] unable to get state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:41.016603    5852 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:41.037625    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:07:42.084230    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:07:42.084230    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0465996s)
I0516 22:07:42.084230    5852 delete.go:82] Unable to get host status for functional-20220516220221-2444, assuming it has already been deleted: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:42.091223    5852 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220516220221-2444
W0516 22:07:43.101826    5852 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220516220221-2444 returned with exit code 1
I0516 22:07:43.101826    5852 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220516220221-2444: (1.0105985s)
I0516 22:07:43.101826    5852 kic.go:356] could not find the container functional-20220516220221-2444 to remove it. will try anyways
I0516 22:07:43.112922    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:07:44.139273    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:07:44.139273    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0263466s)
W0516 22:07:44.139273    5852 oci.go:84] error getting container status, will try to delete anyways: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:44.148507    5852 cli_runner.go:164] Run: docker exec --privileged -t functional-20220516220221-2444 /bin/bash -c "sudo init 0"
W0516 22:07:45.187649    5852 cli_runner.go:211] docker exec --privileged -t functional-20220516220221-2444 /bin/bash -c "sudo init 0" returned with exit code 1
I0516 22:07:45.187649    5852 cli_runner.go:217] Completed: docker exec --privileged -t functional-20220516220221-2444 /bin/bash -c "sudo init 0": (1.0389849s)
I0516 22:07:45.187649    5852 oci.go:641] error shutdown functional-20220516220221-2444: docker exec --privileged -t functional-20220516220221-2444 /bin/bash -c "sudo init 0": exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:46.198976    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:07:47.222052    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:07:47.222052    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0229972s)
I0516 22:07:47.222126    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:47.222126    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
I0516 22:07:47.222170    5852 retry.go:31] will retry after 484.444922ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:47.719123    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:07:48.740971    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:07:48.741111    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.021687s)
I0516 22:07:48.741111    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:48.741111    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
I0516 22:07:48.741111    5852 retry.go:31] will retry after 587.275613ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:49.351577    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:07:50.386444    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:07:50.386590    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.034687s)
I0516 22:07:50.386590    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:50.386590    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
I0516 22:07:50.386590    5852 retry.go:31] will retry after 892.239589ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:51.299154    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:07:52.322362    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:07:52.322362    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0231563s)
I0516 22:07:52.322646    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:52.322646    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
I0516 22:07:52.322646    5852 retry.go:31] will retry after 1.989705391s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:54.333536    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:07:55.377643    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:07:55.377643    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0441022s)
I0516 22:07:55.377643    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:55.377643    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
I0516 22:07:55.377643    5852 retry.go:31] will retry after 1.818837414s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:57.219610    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:07:58.254218    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:07:58.254252    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0344902s)
I0516 22:07:58.254420    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:58.254466    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
I0516 22:07:58.254496    5852 retry.go:31] will retry after 2.669912672s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:08:00.938347    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:08:01.979848    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:08:01.979883    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0413879s)
I0516 22:08:01.979954    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:08:01.979954    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
I0516 22:08:01.980026    5852 oci.go:88] couldn't shut down functional-20220516220221-2444 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444

I0516 22:08:01.988556    5852 cli_runner.go:164] Run: docker rm -f -v functional-20220516220221-2444
I0516 22:08:02.994879    5852 cli_runner.go:217] Completed: docker rm -f -v functional-20220516220221-2444: (1.0061362s)
I0516 22:08:03.003708    5852 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220516220221-2444
W0516 22:08:04.043419    5852 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220516220221-2444 returned with exit code 1
I0516 22:08:04.043419    5852 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220516220221-2444: (1.039552s)
I0516 22:08:04.051561    5852 cli_runner.go:164] Run: docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0516 22:08:05.081774    5852 cli_runner.go:211] docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0516 22:08:05.081774    5852 cli_runner.go:217] Completed: docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0302085s)
I0516 22:08:05.090445    5852 network_create.go:272] running [docker network inspect functional-20220516220221-2444] to gather additional debugging logs...
I0516 22:08:05.090445    5852 cli_runner.go:164] Run: docker network inspect functional-20220516220221-2444
W0516 22:08:06.111971    5852 cli_runner.go:211] docker network inspect functional-20220516220221-2444 returned with exit code 1
I0516 22:08:06.111971    5852 cli_runner.go:217] Completed: docker network inspect functional-20220516220221-2444: (1.0215211s)
I0516 22:08:06.111971    5852 network_create.go:275] error running [docker network inspect functional-20220516220221-2444]: docker network inspect functional-20220516220221-2444: exit status 1
stdout:
[]

stderr:
Error: No such network: functional-20220516220221-2444
I0516 22:08:06.111971    5852 network_create.go:277] output of [docker network inspect functional-20220516220221-2444]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: functional-20220516220221-2444

** /stderr **
W0516 22:08:06.113224    5852 delete.go:139] delete failed (probably ok) <nil>
I0516 22:08:06.113224    5852 fix.go:115] Sleeping 1 second for extra luck!
I0516 22:08:07.116774    5852 start.go:131] createHost starting for "" (driver="docker")
I0516 22:08:07.120577    5852 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0516 22:08:07.120862    5852 start.go:165] libmachine.API.Create for "functional-20220516220221-2444" (driver="docker")
I0516 22:08:07.120862    5852 client.go:168] LocalClient.Create starting
I0516 22:08:07.121701    5852 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
I0516 22:08:07.121995    5852 main.go:134] libmachine: Decoding PEM data...
I0516 22:08:07.122036    5852 main.go:134] libmachine: Parsing certificate...
I0516 22:08:07.122096    5852 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
I0516 22:08:07.122096    5852 main.go:134] libmachine: Decoding PEM data...
I0516 22:08:07.122096    5852 main.go:134] libmachine: Parsing certificate...
I0516 22:08:07.131141    5852 cli_runner.go:164] Run: docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0516 22:08:08.145352    5852 cli_runner.go:211] docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0516 22:08:08.145573    5852 cli_runner.go:217] Completed: docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0140756s)
I0516 22:08:08.153956    5852 network_create.go:272] running [docker network inspect functional-20220516220221-2444] to gather additional debugging logs...
I0516 22:08:08.153956    5852 cli_runner.go:164] Run: docker network inspect functional-20220516220221-2444
W0516 22:08:09.174362    5852 cli_runner.go:211] docker network inspect functional-20220516220221-2444 returned with exit code 1
I0516 22:08:09.174362    5852 cli_runner.go:217] Completed: docker network inspect functional-20220516220221-2444: (1.0204007s)
I0516 22:08:09.174362    5852 network_create.go:275] error running [docker network inspect functional-20220516220221-2444]: docker network inspect functional-20220516220221-2444: exit status 1
stdout:
[]

stderr:
Error: No such network: functional-20220516220221-2444
I0516 22:08:09.174362    5852 network_create.go:277] output of [docker network inspect functional-20220516220221-2444]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: functional-20220516220221-2444

** /stderr **
I0516 22:08:09.182932    5852 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0516 22:08:10.195219    5852 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0122821s)
I0516 22:08:10.212092    5852 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007f61d8] amended:true}} dirty:map[192.168.49.0:0xc0007f61d8 192.168.58.0:0xc0007f67d8 192.168.67.0:0xc0005905f8 192.168.76.0:0xc0000063e8] misses:2}
I0516 22:08:10.212092    5852 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0516 22:08:10.228420    5852 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007f61d8] amended:true}} dirty:map[192.168.49.0:0xc0007f61d8 192.168.58.0:0xc0007f67d8 192.168.67.0:0xc0005905f8 192.168.76.0:0xc0000063e8] misses:3}
I0516 22:08:10.228420    5852 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0516 22:08:10.244785    5852 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007f61d8 192.168.58.0:0xc0007f67d8 192.168.67.0:0xc0005905f8 192.168.76.0:0xc0000063e8] amended:false}} dirty:map[] misses:0}
I0516 22:08:10.244785    5852 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0516 22:08:10.258795    5852 network.go:279] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007f61d8 192.168.58.0:0xc0007f67d8 192.168.67.0:0xc0005905f8 192.168.76.0:0xc0000063e8] amended:false}} dirty:map[] misses:0}
I0516 22:08:10.258795    5852 network.go:238] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0516 22:08:10.273742    5852 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007f61d8 192.168.58.0:0xc0007f67d8 192.168.67.0:0xc0005905f8 192.168.76.0:0xc0000063e8] amended:true}} dirty:map[192.168.49.0:0xc0007f61d8 192.168.58.0:0xc0007f67d8 192.168.67.0:0xc0005905f8 192.168.76.0:0xc0000063e8 192.168.85.0:0xc000802530] misses:0}
I0516 22:08:10.273742    5852 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0516 22:08:10.273742    5852 network_create.go:115] attempt to create docker network functional-20220516220221-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
I0516 22:08:10.282768    5852 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444
W0516 22:08:11.307037    5852 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444 returned with exit code 1
I0516 22:08:11.307089    5852 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: (1.0241123s)
E0516 22:08:11.307164    5852 network_create.go:104] error while trying to create docker network functional-20220516220221-2444 192.168.85.0/24: create docker network functional-20220516220221-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 560a152dd5affb037c695a1ddfa127aa50d1a7210a7b7635805929face070e7a (br-560a152dd5af): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
W0516 22:08:11.307428    5852 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220516220221-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 560a152dd5affb037c695a1ddfa127aa50d1a7210a7b7635805929face070e7a (br-560a152dd5af): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4

I0516 22:08:11.323114    5852 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0516 22:08:12.366822    5852 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0437032s)
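The "networks have overlapping IPv4" failure above means some existing bridge network already covers 192.168.85.0/24. One way to locate the culprit is to feed `docker network inspect` output into an overlap check; a sketch with illustrative data, since the conflicting network's subnet is not shown in this log:

```python
import json
from ipaddress import ip_network

def overlapping_networks(inspect_json, requested):
    """Given the JSON array printed by `docker network inspect`, return
    the names of networks whose IPAM subnets overlap `requested`."""
    req = ip_network(requested)
    hits = []
    for net in json.loads(inspect_json):
        for cfg in (net.get("IPAM", {}).get("Config") or []):
            if "Subnet" in cfg and ip_network(cfg["Subnet"]).overlaps(req):
                hits.append(net["Name"])
    return hits

# Shaped like real `docker network inspect` output; values are illustrative.
sample = json.dumps([
    {"Name": "bridge", "IPAM": {"Config": [{"Subnet": "172.17.0.0/16"}]}},
    {"Name": "stale-net", "IPAM": {"Config": [{"Subnet": "192.168.85.0/24"}]}},
])
print(overlapping_networks(sample, "192.168.85.0/24"))  # ['stale-net']
```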
I0516 22:08:12.375636    5852 cli_runner.go:164] Run: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true
W0516 22:08:13.393783    5852 cli_runner.go:211] docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
I0516 22:08:13.393783    5852 cli_runner.go:217] Completed: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0181419s)
I0516 22:08:13.393783    5852 client.go:171] LocalClient.Create took 6.272891s
I0516 22:08:15.414551    5852 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0516 22:08:15.421561    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
W0516 22:08:16.451318    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
I0516 22:08:16.451318    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0296225s)
I0516 22:08:16.451499    5852 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
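The `retry.go:31] will retry after ...` lines in this log reflect a simple retry-with-delay loop around the port lookup. A hedged sketch of that pattern; the delay values below are the jittered ones printed here, not minikube's actual backoff policy:

```python
import time

def retry_after(fn, attempts=4, delays=(0.267, 0.198, 0.313, 0.341)):
    """Re-run fn until it succeeds, sleeping a pre-chosen delay between
    tries. Raises the last error once attempts are exhausted."""
    last = None
    for i in range(attempts):
        try:
            return fn()
        except Exception as err:  # broad on purpose for the sketch
            last = err
            if i < attempts - 1:
                time.sleep(delays[i % len(delays)])
    raise last

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("No such container")
    return "22"  # e.g. the host port being looked up

print(retry_after(flaky, delays=(0.01,)))  # 22 (succeeds on third try)
```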
I0516 22:08:16.732734    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
W0516 22:08:17.740302    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
I0516 22:08:17.740302    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0075633s)
W0516 22:08:17.740302    5852 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444

W0516 22:08:17.740302    5852 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:08:17.751760    5852 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0516 22:08:17.758715    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
W0516 22:08:18.775961    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
I0516 22:08:18.775961    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.017241s)
I0516 22:08:18.775961    5852 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:08:18.992746    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
W0516 22:08:20.013517    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
I0516 22:08:20.013517    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0207663s)
W0516 22:08:20.013517    5852 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444

W0516 22:08:20.013517    5852 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
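The `df -h /var | awk 'NR==2{print $5}'` and `df -BG /var | awk 'NR==2{print $4}'` probes above each pick one field from the second row of `df` output. An equivalent sketch in Python, using a made-up sample row since the node container was never created:

```python
def df_field(df_output, field):
    """Mimic `df ... | awk 'NR==2{print $N}'`: row 2, 1-based field N."""
    lines = df_output.strip().splitlines()
    return lines[1].split()[field - 1]

# Sample output shape only; the real values would come from the node.
sample = (
    "Filesystem     1G-blocks  Used Available Use% Mounted on\n"
    "overlay              98G   12G       81G  13% /var\n"
)
print(df_field(sample, 4))  # Available column -> 81G
print(df_field(sample, 5))  # Use% column -> 13%
```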
I0516 22:08:20.013517    5852 start.go:134] duration metric: createHost completed in 12.8966816s
I0516 22:08:20.024676    5852 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0516 22:08:20.031751    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
W0516 22:08:21.055352    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
I0516 22:08:21.055352    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0235955s)
I0516 22:08:21.055681    5852 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:08:21.379022    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
W0516 22:08:22.409423    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
I0516 22:08:22.409565    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0302162s)
W0516 22:08:22.409565    5852 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444

W0516 22:08:22.409565    5852 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:08:22.420439    5852 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0516 22:08:22.426468    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
W0516 22:08:23.444624    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
I0516 22:08:23.444624    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0179634s)
I0516 22:08:23.444624    5852 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:08:23.798533    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
W0516 22:08:24.806240    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
I0516 22:08:24.806350    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0075259s)
W0516 22:08:24.806350    5852 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444

W0516 22:08:24.806350    5852 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:08:24.806350    5852 fix.go:57] fixHost completed within 45.8918885s
I0516 22:08:24.806350    5852 start.go:81] releasing machines lock for "functional-20220516220221-2444", held for 45.8918885s
W0516 22:08:24.807088    5852 out.go:239] * Failed to start docker container. Running "minikube delete -p functional-20220516220221-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220516220221-2444 container: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220516220221-2444: error while creating volume root path '/var/lib/docker/volumes/functional-20220516220221-2444': mkdir /var/lib/docker/volumes/functional-20220516220221-2444: read-only file system

I0516 22:08:24.812861    5852 out.go:177] 
W0516 22:08:24.815298    5852 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220516220221-2444 container: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220516220221-2444: error while creating volume root path '/var/lib/docker/volumes/functional-20220516220221-2444': mkdir /var/lib/docker/volumes/functional-20220516220221-2444: read-only file system

W0516 22:08:24.815298    5852 out.go:239] * Suggestion: Restart Docker
W0516 22:08:24.815298    5852 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
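The volume creation fails because `/var/lib/docker/volumes` is mounted read-only inside the Docker Desktop VM (hence the "Restart Docker" suggestion). A sketch of how one might spot such mounts by parsing `/proc/mounts`-format text; the sample data is illustrative:

```python
def readonly_mounts(proc_mounts_text):
    """Return mount points whose options include 'ro' (read-only).

    Parses /proc/mounts-format lines: device mountpoint fstype options ...
    """
    ro = []
    for line in proc_mounts_text.splitlines():
        parts = line.split()
        if len(parts) >= 4 and "ro" in parts[3].split(","):
            ro.append(parts[1])
    return ro

# Illustrative sample; on the failing VM one would read /proc/mounts itself.
sample = (
    "overlay / overlay rw,relatime 0 0\n"
    "/dev/sdc /var/lib/docker ext4 ro,relatime 0 0\n"
)
print(readonly_mounts(sample))  # ['/var/lib/docker']
```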
I0516 22:08:24.819508    5852 out.go:177] 

* 
***
--- FAIL: TestFunctional/serial/LogsCmd (3.51s)

TestFunctional/serial/LogsFileCmd (4.34s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1242: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 logs --file C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialLogsFileCmd295237646\001\logs.txt
functional_test.go:1242: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 logs --file C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialLogsFileCmd295237646\001\logs.txt: exit status 80 (4.1472187s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_logs_80bd2298da0c083373823443180fffe8ad701919_703.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1244: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 logs --file C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialLogsFileCmd295237646\001\logs.txt failed: exit status 80
functional_test.go:1247: expected empty minikube logs output, but got: 
***
-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_logs_80bd2298da0c083373823443180fffe8ad701919_703.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr *****
functional_test.go:1220: expected minikube logs to include word: -"Linux"- but got 
**** 
* ==> Audit <==
* |---------|-------------------------------------|-------------------------------------|-------------------|----------------|---------------------|---------------------|
| Command |                Args                 |               Profile               |       User        |    Version     |     Start Time      |      End Time       |
|---------|-------------------------------------|-------------------------------------|-------------------|----------------|---------------------|---------------------|
| delete  | --all                               | download-only-20220516215532-2444   | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 21:56 GMT | 16 May 22 21:56 GMT |
| delete  | -p                                  | download-only-20220516215532-2444   | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 21:56 GMT | 16 May 22 21:56 GMT |
|         | download-only-20220516215532-2444   |                                     |                   |                |                     |                     |
| delete  | -p                                  | download-only-20220516215532-2444   | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 21:56 GMT | 16 May 22 21:56 GMT |
|         | download-only-20220516215532-2444   |                                     |                   |                |                     |                     |
| delete  | -p                                  | download-docker-20220516215629-2444 | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 21:57 GMT | 16 May 22 21:57 GMT |
|         | download-docker-20220516215629-2444 |                                     |                   |                |                     |                     |
| delete  | -p                                  | binary-mirror-20220516215715-2444   | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 21:57 GMT | 16 May 22 21:57 GMT |
|         | binary-mirror-20220516215715-2444   |                                     |                   |                |                     |                     |
| delete  | -p addons-20220516215732-2444       | addons-20220516215732-2444          | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 21:58 GMT | 16 May 22 21:58 GMT |
| delete  | -p nospam-20220516215858-2444       | nospam-20220516215858-2444          | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:02 GMT | 16 May 22 22:02 GMT |
| cache   | functional-20220516220221-2444      | functional-20220516220221-2444      | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:05 GMT | 16 May 22 22:05 GMT |
|         | cache add k8s.gcr.io/pause:3.1      |                                     |                   |                |                     |                     |
| cache   | functional-20220516220221-2444      | functional-20220516220221-2444      | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:05 GMT | 16 May 22 22:05 GMT |
|         | cache add k8s.gcr.io/pause:3.3      |                                     |                   |                |                     |                     |
| cache   | functional-20220516220221-2444      | functional-20220516220221-2444      | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:05 GMT | 16 May 22 22:05 GMT |
|         | cache add                           |                                     |                   |                |                     |                     |
|         | k8s.gcr.io/pause:latest             |                                     |                   |                |                     |                     |
| cache   | delete k8s.gcr.io/pause:3.3         | minikube                            | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:06 GMT | 16 May 22 22:06 GMT |
| cache   | list                                | minikube                            | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:06 GMT | 16 May 22 22:06 GMT |
| cache   | functional-20220516220221-2444      | functional-20220516220221-2444      | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:06 GMT | 16 May 22 22:06 GMT |
|         | cache reload                        |                                     |                   |                |                     |                     |
| cache   | delete k8s.gcr.io/pause:3.1         | minikube                            | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:06 GMT | 16 May 22 22:06 GMT |
| cache   | delete k8s.gcr.io/pause:latest      | minikube                            | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:06 GMT | 16 May 22 22:06 GMT |
|---------|-------------------------------------|-------------------------------------|-------------------|----------------|---------------------|---------------------|

* 
* ==> Last Start <==
* Log file created at: 2022/05/16 22:06:32
Running on machine: minikube2
Binary: Built with gc go1.18.2 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
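Per the "Log line format" note above, every entry carries a klog-style header. A small regex sketch for pulling it apart, assuming standard klog framing:

```python
import re

# klog header: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
KLOG = re.compile(
    r"^(?P<level>[IWEF])(?P<mmdd>\d{4}) "
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) +"
    r"(?P<tid>\d+) (?P<src>[^:]+:\d+)\] (?P<msg>.*)$"
)

line = "I0516 22:06:31.999388    5852 out.go:296] Setting OutFile to fd 776 ..."
m = KLOG.match(line)
print(m.group("level"), m.group("src"), m.group("msg"))
# I out.go:296 Setting OutFile to fd 776 ...
```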
I0516 22:06:31.999388    5852 out.go:296] Setting OutFile to fd 776 ...
I0516 22:06:32.057074    5852 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0516 22:06:32.057074    5852 out.go:309] Setting ErrFile to fd 972...
I0516 22:06:32.057074    5852 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0516 22:06:32.067765    5852 out.go:303] Setting JSON to false
I0516 22:06:32.070088    5852 start.go:115] hostinfo: {"hostname":"minikube2","uptime":1904,"bootTime":1652736888,"procs":147,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
W0516 22:06:32.070088    5852 start.go:123] gopshost.Virtualization returned error: not implemented yet
I0516 22:06:32.074746    5852 out.go:177] * [functional-20220516220221-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
I0516 22:06:32.079203    5852 notify.go:193] Checking for updates...
I0516 22:06:32.081874    5852 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
I0516 22:06:32.084298    5852 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
I0516 22:06:32.086659    5852 out.go:177]   - MINIKUBE_LOCATION=12739
I0516 22:06:32.088941    5852 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0516 22:06:32.091576    5852 config.go:178] Loaded profile config "functional-20220516220221-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
I0516 22:06:32.091576    5852 driver.go:358] Setting default libvirt URI to qemu:///system
I0516 22:06:34.645226    5852 docker.go:137] docker version: linux-20.10.14
I0516 22:06:34.654010    5852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0516 22:06:36.656610    5852 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0025907s)
I0516 22:06:36.657377    5852 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:45 SystemTime:2022-05-16 22:06:35.6434948 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0516 22:06:36.662829    5852 out.go:177] * Using the docker driver based on existing profile
I0516 22:06:36.666827    5852 start.go:284] selected driver: docker
I0516 22:06:36.666827    5852 start.go:806] validating driver "docker" against &{Name:functional-20220516220221-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220516220221-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
I0516 22:06:36.666827    5852 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0516 22:06:36.685847    5852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0516 22:06:38.688529    5852 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.002673s)
I0516 22:06:38.688529    5852 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:45 SystemTime:2022-05-16 22:06:37.6703272 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0516 22:06:38.749106    5852 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0516 22:06:38.749106    5852 cni.go:95] Creating CNI manager for ""
I0516 22:06:38.749106    5852 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0516 22:06:38.749106    5852 start_flags.go:306] config:
{Name:functional-20220516220221-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220516220221-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
I0516 22:06:38.755744    5852 out.go:177] * Starting control plane node functional-20220516220221-2444 in cluster functional-20220516220221-2444
I0516 22:06:38.757562    5852 cache.go:120] Beginning downloading kic base image for docker with docker
I0516 22:06:38.760512    5852 out.go:177] * Pulling base image ...
I0516 22:06:38.763342    5852 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
I0516 22:06:38.764354    5852 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
I0516 22:06:38.764354    5852 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
I0516 22:06:38.764562    5852 cache.go:57] Caching tarball of preloaded images
I0516 22:06:38.765091    5852 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0516 22:06:38.765279    5852 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
I0516 22:06:38.765625    5852 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-20220516220221-2444\config.json ...
I0516 22:06:39.836592    5852 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
I0516 22:06:39.836664    5852 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
I0516 22:06:39.836987    5852 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
I0516 22:06:39.837067    5852 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
I0516 22:06:39.837220    5852 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory, skipping pull
I0516 22:06:39.837260    5852 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in cache, skipping pull
I0516 22:06:39.837497    5852 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
I0516 22:06:39.837637    5852 cache.go:160] Loading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from local cache
I0516 22:06:39.837669    5852 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
I0516 22:06:42.033467    5852 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from cached tarball
I0516 22:06:42.033467    5852 cache.go:206] Successfully downloaded all kic artifacts
I0516 22:06:42.033467    5852 start.go:352] acquiring machines lock for functional-20220516220221-2444: {Name:mkdcc2ea8456bfc6c4e9b4af97ac214783a7ee2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0516 22:06:42.033467    5852 start.go:356] acquired machines lock for "functional-20220516220221-2444" in 0s
I0516 22:06:42.034128    5852 start.go:94] Skipping create...Using existing machine configuration
I0516 22:06:42.034214    5852 fix.go:55] fixHost starting: 
I0516 22:06:42.053721    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:06:43.048238    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:06:43.048238    5852 fix.go:103] recreateIfNeeded on functional-20220516220221-2444: state= err=unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:06:43.048238    5852 fix.go:108] machineExists: false. err=machine does not exist
I0516 22:06:43.053124    5852 out.go:177] * docker "functional-20220516220221-2444" container is missing, will recreate.
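The "machine does not exist" decision above is driven by `docker container inspect` exiting non-zero with a "No such container" error on stderr. A minimal Go sketch of that interpretation (illustrative only; `machineExists` is a hypothetical helper, not minikube's actual implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// machineExists interprets the result of
//   docker container inspect <name> --format={{.State.Status}}
// the way the log above does: a non-zero exit whose stderr contains
// "No such container" means the container, and hence the machine, is gone.
// Hypothetical helper for illustration.
func machineExists(exitCode int, stderr string) bool {
	if exitCode == 0 {
		return true
	}
	return !strings.Contains(stderr, "No such container")
}

func main() {
	// The case seen in this log: inspect failed with exit status 1.
	stderr := "Error: No such container: functional-20220516220221-2444"
	fmt.Println(machineExists(1, stderr)) // false, so minikube recreates
}
```

With the machine judged missing, minikube falls through to the delete-and-recreate path that follows.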
I0516 22:06:43.055065    5852 delete.go:124] DEMOLISHING functional-20220516220221-2444 ...
I0516 22:06:43.069384    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:06:44.062602    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
W0516 22:06:44.062602    5852 stop.go:75] unable to get state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:06:44.062602    5852 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:06:44.081078    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:06:45.126022    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:06:45.126022    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0446786s)
I0516 22:06:45.126146    5852 delete.go:82] Unable to get host status for functional-20220516220221-2444, assuming it has already been deleted: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:06:45.134248    5852 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220516220221-2444
W0516 22:06:46.130244    5852 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220516220221-2444 returned with exit code 1
I0516 22:06:46.130287    5852 kic.go:356] could not find the container functional-20220516220221-2444 to remove it. will try anyways
I0516 22:06:46.138918    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:06:47.166233    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:06:47.166261    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0271799s)
W0516 22:06:47.166398    5852 oci.go:84] error getting container status, will try to delete anyways: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:06:47.174189    5852 cli_runner.go:164] Run: docker exec --privileged -t functional-20220516220221-2444 /bin/bash -c "sudo init 0"
W0516 22:06:48.199162    5852 cli_runner.go:211] docker exec --privileged -t functional-20220516220221-2444 /bin/bash -c "sudo init 0" returned with exit code 1
I0516 22:06:48.199162    5852 cli_runner.go:217] Completed: docker exec --privileged -t functional-20220516220221-2444 /bin/bash -c "sudo init 0": (1.024747s)
I0516 22:06:48.199162    5852 oci.go:641] error shutdown functional-20220516220221-2444: docker exec --privileged -t functional-20220516220221-2444 /bin/bash -c "sudo init 0": exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:06:49.211175    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:06:50.235472    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:06:50.235539    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0242923s)
I0516 22:06:50.235716    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:06:50.235716    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
I0516 22:06:50.235775    5852 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:06:50.800393    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:06:51.836012    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:06:51.836012    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.035414s)
I0516 22:06:51.836012    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:06:51.836012    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
I0516 22:06:51.836012    5852 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:06:52.944005    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:06:53.945040    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:06:53.945040    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0010304s)
I0516 22:06:53.945040    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:06:53.945040    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
I0516 22:06:53.945040    5852 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:06:55.274815    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:06:56.281516    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:06:56.281516    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0066968s)
I0516 22:06:56.281516    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:06:56.281516    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
I0516 22:06:56.281516    5852 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:06:57.890354    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:06:58.915594    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:06:58.915594    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0252355s)
I0516 22:06:58.915594    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:06:58.915594    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
I0516 22:06:58.915594    5852 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:01.270523    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:07:02.293525    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:07:02.293792    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0228678s)
I0516 22:07:02.293792    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:02.293792    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
I0516 22:07:02.293792    5852 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:06.822634    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:07:07.848842    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:07:07.848842    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0260767s)
I0516 22:07:07.848985    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:07.848985    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
I0516 22:07:07.849055    5852 oci.go:88] couldn't shut down functional-20220516220221-2444 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444

I0516 22:07:07.857435    5852 cli_runner.go:164] Run: docker rm -f -v functional-20220516220221-2444
I0516 22:07:08.883377    5852 cli_runner.go:217] Completed: docker rm -f -v functional-20220516220221-2444: (1.0259367s)
I0516 22:07:08.891224    5852 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220516220221-2444
W0516 22:07:09.930306    5852 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220516220221-2444 returned with exit code 1
I0516 22:07:09.930441    5852 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220516220221-2444: (1.0390776s)
I0516 22:07:09.939309    5852 cli_runner.go:164] Run: docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0516 22:07:11.000604    5852 cli_runner.go:211] docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0516 22:07:11.000735    5852 cli_runner.go:217] Completed: docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0611022s)
I0516 22:07:11.009399    5852 network_create.go:272] running [docker network inspect functional-20220516220221-2444] to gather additional debugging logs...
I0516 22:07:11.009399    5852 cli_runner.go:164] Run: docker network inspect functional-20220516220221-2444
W0516 22:07:12.046399    5852 cli_runner.go:211] docker network inspect functional-20220516220221-2444 returned with exit code 1
I0516 22:07:12.046399    5852 cli_runner.go:217] Completed: docker network inspect functional-20220516220221-2444: (1.0369953s)
I0516 22:07:12.046399    5852 network_create.go:275] error running [docker network inspect functional-20220516220221-2444]: docker network inspect functional-20220516220221-2444: exit status 1
stdout:
[]

stderr:
Error: No such network: functional-20220516220221-2444
I0516 22:07:12.046399    5852 network_create.go:277] output of [docker network inspect functional-20220516220221-2444]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: functional-20220516220221-2444

** /stderr **
W0516 22:07:12.047537    5852 delete.go:139] delete failed (probably ok) <nil>
I0516 22:07:12.047716    5852 fix.go:115] Sleeping 1 second for extra luck!
I0516 22:07:13.052977    5852 start.go:131] createHost starting for "" (driver="docker")
I0516 22:07:13.057120    5852 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0516 22:07:13.057404    5852 start.go:165] libmachine.API.Create for "functional-20220516220221-2444" (driver="docker")
I0516 22:07:13.057404    5852 client.go:168] LocalClient.Create starting
I0516 22:07:13.058286    5852 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
I0516 22:07:13.058575    5852 main.go:134] libmachine: Decoding PEM data...
I0516 22:07:13.058632    5852 main.go:134] libmachine: Parsing certificate...
I0516 22:07:13.058950    5852 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
I0516 22:07:13.058950    5852 main.go:134] libmachine: Decoding PEM data...
I0516 22:07:13.058950    5852 main.go:134] libmachine: Parsing certificate...
I0516 22:07:13.067996    5852 cli_runner.go:164] Run: docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0516 22:07:14.141149    5852 cli_runner.go:211] docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0516 22:07:14.141149    5852 cli_runner.go:217] Completed: docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0728931s)
I0516 22:07:14.149750    5852 network_create.go:272] running [docker network inspect functional-20220516220221-2444] to gather additional debugging logs...
I0516 22:07:14.149750    5852 cli_runner.go:164] Run: docker network inspect functional-20220516220221-2444
W0516 22:07:15.188474    5852 cli_runner.go:211] docker network inspect functional-20220516220221-2444 returned with exit code 1
I0516 22:07:15.188474    5852 cli_runner.go:217] Completed: docker network inspect functional-20220516220221-2444: (1.0387192s)
I0516 22:07:15.188474    5852 network_create.go:275] error running [docker network inspect functional-20220516220221-2444]: docker network inspect functional-20220516220221-2444: exit status 1
stdout:
[]

stderr:
Error: No such network: functional-20220516220221-2444
I0516 22:07:15.188474    5852 network_create.go:277] output of [docker network inspect functional-20220516220221-2444]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: functional-20220516220221-2444

** /stderr **
I0516 22:07:15.198253    5852 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0516 22:07:16.217518    5852 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0190552s)
I0516 22:07:16.235320    5852 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0007f61d8] misses:0}
I0516 22:07:16.235320    5852 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0516 22:07:16.235320    5852 network_create.go:115] attempt to create docker network functional-20220516220221-2444 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0516 22:07:16.244562    5852 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444
W0516 22:07:17.247524    5852 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444 returned with exit code 1
I0516 22:07:17.247524    5852 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: (1.0029574s)
W0516 22:07:17.247524    5852 network_create.go:107] failed to create docker network functional-20220516220221-2444 192.168.49.0/24, will retry: subnet is taken
I0516 22:07:17.261827    5852 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007f61d8] amended:false}} dirty:map[] misses:0}
I0516 22:07:17.261827    5852 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0516 22:07:17.278250    5852 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007f61d8] amended:true}} dirty:map[192.168.49.0:0xc0007f61d8 192.168.58.0:0xc0007f67d8] misses:0}
I0516 22:07:17.278396    5852 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0516 22:07:17.278396    5852 network_create.go:115] attempt to create docker network functional-20220516220221-2444 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0516 22:07:17.287025    5852 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444
W0516 22:07:18.296218    5852 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444 returned with exit code 1
I0516 22:07:18.296218    5852 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: (1.0091886s)
W0516 22:07:18.296218    5852 network_create.go:107] failed to create docker network functional-20220516220221-2444 192.168.58.0/24, will retry: subnet is taken
I0516 22:07:18.311703    5852 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007f61d8] amended:true}} dirty:map[192.168.49.0:0xc0007f61d8 192.168.58.0:0xc0007f67d8] misses:1}
I0516 22:07:18.311703    5852 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0516 22:07:18.325787    5852 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007f61d8] amended:true}} dirty:map[192.168.49.0:0xc0007f61d8 192.168.58.0:0xc0007f67d8 192.168.67.0:0xc0005905f8] misses:1}
I0516 22:07:18.325787    5852 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0516 22:07:18.325787    5852 network_create.go:115] attempt to create docker network functional-20220516220221-2444 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
I0516 22:07:18.335228    5852 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444
W0516 22:07:19.369645    5852 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444 returned with exit code 1
I0516 22:07:19.369645    5852 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: (1.0344123s)
W0516 22:07:19.369645    5852 network_create.go:107] failed to create docker network functional-20220516220221-2444 192.168.67.0/24, will retry: subnet is taken
I0516 22:07:19.386275    5852 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007f61d8] amended:true}} dirty:map[192.168.49.0:0xc0007f61d8 192.168.58.0:0xc0007f67d8 192.168.67.0:0xc0005905f8] misses:2}
I0516 22:07:19.386275    5852 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0516 22:07:19.401320    5852 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007f61d8] amended:true}} dirty:map[192.168.49.0:0xc0007f61d8 192.168.58.0:0xc0007f67d8 192.168.67.0:0xc0005905f8 192.168.76.0:0xc0000063e8] misses:2}
I0516 22:07:19.401320    5852 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0516 22:07:19.401320    5852 network_create.go:115] attempt to create docker network functional-20220516220221-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
I0516 22:07:19.408139    5852 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444
W0516 22:07:20.461499    5852 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444 returned with exit code 1
I0516 22:07:20.461499    5852 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: (1.0533544s)
E0516 22:07:20.461927    5852 network_create.go:104] error while trying to create docker network functional-20220516220221-2444 192.168.76.0/24: create docker network functional-20220516220221-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 2b639637f074ced5bf54082ee5531d87dde24e32bb4e4786e00fd679a5ce6f04 (br-2b639637f074): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
W0516 22:07:20.462222    5852 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220516220221-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 2b639637f074ced5bf54082ee5531d87dde24e32bb4e4786e00fd679a5ce6f04 (br-2b639637f074): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4

I0516 22:07:20.477213    5852 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0516 22:07:21.507936    5852 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0306643s)
I0516 22:07:21.516883    5852 cli_runner.go:164] Run: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true
W0516 22:07:22.547986    5852 cli_runner.go:211] docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
I0516 22:07:22.548018    5852 cli_runner.go:217] Completed: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0309506s)
I0516 22:07:22.548173    5852 client.go:171] LocalClient.Create took 9.4906934s
I0516 22:07:24.574043    5852 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0516 22:07:24.582081    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
W0516 22:07:25.611644    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
I0516 22:07:25.611644    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0295586s)
I0516 22:07:25.611644    5852 retry.go:31] will retry after 164.129813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:25.795199    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
W0516 22:07:26.798402    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
I0516 22:07:26.798439    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0030389s)
W0516 22:07:26.798465    5852 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444

W0516 22:07:26.798465    5852 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:26.809383    5852 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0516 22:07:26.816394    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
W0516 22:07:27.858611    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
I0516 22:07:27.858611    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0422118s)
I0516 22:07:27.858611    5852 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:28.073000    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
W0516 22:07:29.107576    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
I0516 22:07:29.107638    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0343639s)
W0516 22:07:29.107638    5852 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444

W0516 22:07:29.107638    5852 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:29.107638    5852 start.go:134] duration metric: createHost completed in 16.0545847s
I0516 22:07:29.120371    5852 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0516 22:07:29.129045    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
W0516 22:07:30.174991    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
I0516 22:07:30.174991    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0459417s)
I0516 22:07:30.174991    5852 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:30.515550    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
W0516 22:07:31.532933    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
I0516 22:07:31.532933    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0173779s)
W0516 22:07:31.532933    5852 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444

W0516 22:07:31.532933    5852 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:31.545827    5852 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0516 22:07:31.555898    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
W0516 22:07:32.612377    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
I0516 22:07:32.612425    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0563284s)
I0516 22:07:32.612729    5852 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:32.853330    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
W0516 22:07:33.897877    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
I0516 22:07:33.897877    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0444354s)
W0516 22:07:33.898144    5852 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
W0516 22:07:33.898196    5852 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:33.898196    5852 fix.go:57] fixHost completed within 51.8637908s
I0516 22:07:33.898196    5852 start.go:81] releasing machines lock for "functional-20220516220221-2444", held for 51.8644883s
W0516 22:07:33.898399    5852 start.go:608] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220516220221-2444 container: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220516220221-2444: error while creating volume root path '/var/lib/docker/volumes/functional-20220516220221-2444': mkdir /var/lib/docker/volumes/functional-20220516220221-2444: read-only file system
W0516 22:07:33.898681    5852 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220516220221-2444 container: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220516220221-2444: error while creating volume root path '/var/lib/docker/volumes/functional-20220516220221-2444': mkdir /var/lib/docker/volumes/functional-20220516220221-2444: read-only file system
I0516 22:07:33.898726    5852 start.go:623] Will try again in 5 seconds ...
I0516 22:07:38.914246    5852 start.go:352] acquiring machines lock for functional-20220516220221-2444: {Name:mkdcc2ea8456bfc6c4e9b4af97ac214783a7ee2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0516 22:07:38.914246    5852 start.go:356] acquired machines lock for "functional-20220516220221-2444" in 0s
I0516 22:07:38.914246    5852 start.go:94] Skipping create...Using existing machine configuration
I0516 22:07:38.914246    5852 fix.go:55] fixHost starting: 
I0516 22:07:38.929100    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:07:39.973298    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:07:39.973321    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0439131s)
I0516 22:07:39.973394    5852 fix.go:103] recreateIfNeeded on functional-20220516220221-2444: state= err=unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:39.973394    5852 fix.go:108] machineExists: false. err=machine does not exist
I0516 22:07:39.977612    5852 out.go:177] * docker "functional-20220516220221-2444" container is missing, will recreate.
I0516 22:07:39.979706    5852 delete.go:124] DEMOLISHING functional-20220516220221-2444 ...
I0516 22:07:39.993215    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:07:41.016603    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:07:41.016603    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0233825s)
W0516 22:07:41.016603    5852 stop.go:75] unable to get state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:41.016603    5852 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:41.037625    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:07:42.084230    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:07:42.084230    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0465996s)
I0516 22:07:42.084230    5852 delete.go:82] Unable to get host status for functional-20220516220221-2444, assuming it has already been deleted: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:42.091223    5852 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220516220221-2444
W0516 22:07:43.101826    5852 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220516220221-2444 returned with exit code 1
I0516 22:07:43.101826    5852 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220516220221-2444: (1.0105985s)
I0516 22:07:43.101826    5852 kic.go:356] could not find the container functional-20220516220221-2444 to remove it. will try anyways
I0516 22:07:43.112922    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:07:44.139273    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:07:44.139273    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0263466s)
W0516 22:07:44.139273    5852 oci.go:84] error getting container status, will try to delete anyways: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:44.148507    5852 cli_runner.go:164] Run: docker exec --privileged -t functional-20220516220221-2444 /bin/bash -c "sudo init 0"
W0516 22:07:45.187649    5852 cli_runner.go:211] docker exec --privileged -t functional-20220516220221-2444 /bin/bash -c "sudo init 0" returned with exit code 1
I0516 22:07:45.187649    5852 cli_runner.go:217] Completed: docker exec --privileged -t functional-20220516220221-2444 /bin/bash -c "sudo init 0": (1.0389849s)
I0516 22:07:45.187649    5852 oci.go:641] error shutdown functional-20220516220221-2444: docker exec --privileged -t functional-20220516220221-2444 /bin/bash -c "sudo init 0": exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:46.198976    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:07:47.222052    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:07:47.222052    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0229972s)
I0516 22:07:47.222126    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:47.222126    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
I0516 22:07:47.222170    5852 retry.go:31] will retry after 484.444922ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:47.719123    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:07:48.740971    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:07:48.741111    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.021687s)
I0516 22:07:48.741111    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:48.741111    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
I0516 22:07:48.741111    5852 retry.go:31] will retry after 587.275613ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:49.351577    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:07:50.386444    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:07:50.386590    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.034687s)
I0516 22:07:50.386590    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:50.386590    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
I0516 22:07:50.386590    5852 retry.go:31] will retry after 892.239589ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:51.299154    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:07:52.322362    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:07:52.322362    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0231563s)
I0516 22:07:52.322646    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:52.322646    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
I0516 22:07:52.322646    5852 retry.go:31] will retry after 1.989705391s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:54.333536    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:07:55.377643    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:07:55.377643    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0441022s)
I0516 22:07:55.377643    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:55.377643    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
I0516 22:07:55.377643    5852 retry.go:31] will retry after 1.818837414s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:57.219610    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:07:58.254218    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:07:58.254252    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0344902s)
I0516 22:07:58.254420    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:07:58.254466    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
I0516 22:07:58.254496    5852 retry.go:31] will retry after 2.669912672s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:08:00.938347    5852 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
W0516 22:08:01.979848    5852 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
I0516 22:08:01.979883    5852 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (1.0413879s)
I0516 22:08:01.979954    5852 oci.go:653] temporary error verifying shutdown: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:08:01.979954    5852 oci.go:655] temporary error: container functional-20220516220221-2444 status is  but expect it to be exited
I0516 22:08:01.980026    5852 oci.go:88] couldn't shut down functional-20220516220221-2444 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:08:01.988556    5852 cli_runner.go:164] Run: docker rm -f -v functional-20220516220221-2444
I0516 22:08:02.994879    5852 cli_runner.go:217] Completed: docker rm -f -v functional-20220516220221-2444: (1.0061362s)
I0516 22:08:03.003708    5852 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220516220221-2444
W0516 22:08:04.043419    5852 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220516220221-2444 returned with exit code 1
I0516 22:08:04.043419    5852 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220516220221-2444: (1.039552s)
I0516 22:08:04.051561    5852 cli_runner.go:164] Run: docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0516 22:08:05.081774    5852 cli_runner.go:211] docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0516 22:08:05.081774    5852 cli_runner.go:217] Completed: docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0302085s)
I0516 22:08:05.090445    5852 network_create.go:272] running [docker network inspect functional-20220516220221-2444] to gather additional debugging logs...
I0516 22:08:05.090445    5852 cli_runner.go:164] Run: docker network inspect functional-20220516220221-2444
W0516 22:08:06.111971    5852 cli_runner.go:211] docker network inspect functional-20220516220221-2444 returned with exit code 1
I0516 22:08:06.111971    5852 cli_runner.go:217] Completed: docker network inspect functional-20220516220221-2444: (1.0215211s)
I0516 22:08:06.111971    5852 network_create.go:275] error running [docker network inspect functional-20220516220221-2444]: docker network inspect functional-20220516220221-2444: exit status 1
stdout:
[]
stderr:
Error: No such network: functional-20220516220221-2444
I0516 22:08:06.111971    5852 network_create.go:277] output of [docker network inspect functional-20220516220221-2444]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error: No such network: functional-20220516220221-2444
** /stderr **
W0516 22:08:06.113224    5852 delete.go:139] delete failed (probably ok) <nil>
I0516 22:08:06.113224    5852 fix.go:115] Sleeping 1 second for extra luck!
I0516 22:08:07.116774    5852 start.go:131] createHost starting for "" (driver="docker")
I0516 22:08:07.120577    5852 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0516 22:08:07.120862    5852 start.go:165] libmachine.API.Create for "functional-20220516220221-2444" (driver="docker")
I0516 22:08:07.120862    5852 client.go:168] LocalClient.Create starting
I0516 22:08:07.121701    5852 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
I0516 22:08:07.121995    5852 main.go:134] libmachine: Decoding PEM data...
I0516 22:08:07.122036    5852 main.go:134] libmachine: Parsing certificate...
I0516 22:08:07.122096    5852 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
I0516 22:08:07.122096    5852 main.go:134] libmachine: Decoding PEM data...
I0516 22:08:07.122096    5852 main.go:134] libmachine: Parsing certificate...
I0516 22:08:07.131141    5852 cli_runner.go:164] Run: docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0516 22:08:08.145352    5852 cli_runner.go:211] docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0516 22:08:08.145573    5852 cli_runner.go:217] Completed: docker network inspect functional-20220516220221-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0140756s)
I0516 22:08:08.153956    5852 network_create.go:272] running [docker network inspect functional-20220516220221-2444] to gather additional debugging logs...
I0516 22:08:08.153956    5852 cli_runner.go:164] Run: docker network inspect functional-20220516220221-2444
W0516 22:08:09.174362    5852 cli_runner.go:211] docker network inspect functional-20220516220221-2444 returned with exit code 1
I0516 22:08:09.174362    5852 cli_runner.go:217] Completed: docker network inspect functional-20220516220221-2444: (1.0204007s)
I0516 22:08:09.174362    5852 network_create.go:275] error running [docker network inspect functional-20220516220221-2444]: docker network inspect functional-20220516220221-2444: exit status 1
stdout:
[]
stderr:
Error: No such network: functional-20220516220221-2444
I0516 22:08:09.174362    5852 network_create.go:277] output of [docker network inspect functional-20220516220221-2444]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error: No such network: functional-20220516220221-2444
** /stderr **
I0516 22:08:09.182932    5852 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0516 22:08:10.195219    5852 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0122821s)
I0516 22:08:10.212092    5852 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007f61d8] amended:true}} dirty:map[192.168.49.0:0xc0007f61d8 192.168.58.0:0xc0007f67d8 192.168.67.0:0xc0005905f8 192.168.76.0:0xc0000063e8] misses:2}
I0516 22:08:10.212092    5852 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0516 22:08:10.228420    5852 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007f61d8] amended:true}} dirty:map[192.168.49.0:0xc0007f61d8 192.168.58.0:0xc0007f67d8 192.168.67.0:0xc0005905f8 192.168.76.0:0xc0000063e8] misses:3}
I0516 22:08:10.228420    5852 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0516 22:08:10.244785    5852 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007f61d8 192.168.58.0:0xc0007f67d8 192.168.67.0:0xc0005905f8 192.168.76.0:0xc0000063e8] amended:false}} dirty:map[] misses:0}
I0516 22:08:10.244785    5852 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0516 22:08:10.258795    5852 network.go:279] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007f61d8 192.168.58.0:0xc0007f67d8 192.168.67.0:0xc0005905f8 192.168.76.0:0xc0000063e8] amended:false}} dirty:map[] misses:0}
I0516 22:08:10.258795    5852 network.go:238] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0516 22:08:10.273742    5852 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007f61d8 192.168.58.0:0xc0007f67d8 192.168.67.0:0xc0005905f8 192.168.76.0:0xc0000063e8] amended:true}} dirty:map[192.168.49.0:0xc0007f61d8 192.168.58.0:0xc0007f67d8 192.168.67.0:0xc0005905f8 192.168.76.0:0xc0000063e8 192.168.85.0:0xc000802530] misses:0}
I0516 22:08:10.273742    5852 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0516 22:08:10.273742    5852 network_create.go:115] attempt to create docker network functional-20220516220221-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
I0516 22:08:10.282768    5852 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444
W0516 22:08:11.307037    5852 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444 returned with exit code 1
I0516 22:08:11.307089    5852 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: (1.0241123s)
E0516 22:08:11.307164    5852 network_create.go:104] error while trying to create docker network functional-20220516220221-2444 192.168.85.0/24: create docker network functional-20220516220221-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 560a152dd5affb037c695a1ddfa127aa50d1a7210a7b7635805929face070e7a (br-560a152dd5af): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
W0516 22:08:11.307428    5852 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220516220221-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 560a152dd5affb037c695a1ddfa127aa50d1a7210a7b7635805929face070e7a (br-560a152dd5af): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4

I0516 22:08:11.323114    5852 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0516 22:08:12.366822    5852 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0437032s)
I0516 22:08:12.375636    5852 cli_runner.go:164] Run: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true
W0516 22:08:13.393783    5852 cli_runner.go:211] docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
I0516 22:08:13.393783    5852 cli_runner.go:217] Completed: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0181419s)
I0516 22:08:13.393783    5852 client.go:171] LocalClient.Create took 6.272891s
I0516 22:08:15.414551    5852 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0516 22:08:15.421561    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
W0516 22:08:16.451318    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
I0516 22:08:16.451318    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0296225s)
I0516 22:08:16.451499    5852 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:08:16.732734    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
W0516 22:08:17.740302    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
I0516 22:08:17.740302    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0075633s)
W0516 22:08:17.740302    5852 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444

W0516 22:08:17.740302    5852 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:08:17.751760    5852 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0516 22:08:17.758715    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
W0516 22:08:18.775961    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
I0516 22:08:18.775961    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.017241s)
I0516 22:08:18.775961    5852 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:08:18.992746    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
W0516 22:08:20.013517    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
I0516 22:08:20.013517    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0207663s)
W0516 22:08:20.013517    5852 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444

W0516 22:08:20.013517    5852 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:08:20.013517    5852 start.go:134] duration metric: createHost completed in 12.8966816s
I0516 22:08:20.024676    5852 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0516 22:08:20.031751    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
W0516 22:08:21.055352    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
I0516 22:08:21.055352    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0235955s)
I0516 22:08:21.055681    5852 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:08:21.379022    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
W0516 22:08:22.409423    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
I0516 22:08:22.409565    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0302162s)
W0516 22:08:22.409565    5852 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444

W0516 22:08:22.409565    5852 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:08:22.420439    5852 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0516 22:08:22.426468    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
W0516 22:08:23.444624    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
I0516 22:08:23.444624    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0179634s)
I0516 22:08:23.444624    5852 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:08:23.798533    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444
W0516 22:08:24.806240    5852 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444 returned with exit code 1
I0516 22:08:24.806350    5852 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: (1.0075259s)
W0516 22:08:24.806350    5852 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444

W0516 22:08:24.806350    5852 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220516220221-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220516220221-2444: exit status 1
stdout:

stderr:
Error: No such container: functional-20220516220221-2444
I0516 22:08:24.806350    5852 fix.go:57] fixHost completed within 45.8918885s
I0516 22:08:24.806350    5852 start.go:81] releasing machines lock for "functional-20220516220221-2444", held for 45.8918885s
W0516 22:08:24.807088    5852 out.go:239] * Failed to start docker container. Running "minikube delete -p functional-20220516220221-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220516220221-2444 container: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220516220221-2444: error while creating volume root path '/var/lib/docker/volumes/functional-20220516220221-2444': mkdir /var/lib/docker/volumes/functional-20220516220221-2444: read-only file system

I0516 22:08:24.812861    5852 out.go:177] 
W0516 22:08:24.815298    5852 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220516220221-2444 container: docker volume create functional-20220516220221-2444 --label name.minikube.sigs.k8s.io=functional-20220516220221-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220516220221-2444: error while creating volume root path '/var/lib/docker/volumes/functional-20220516220221-2444': mkdir /var/lib/docker/volumes/functional-20220516220221-2444: read-only file system

W0516 22:08:24.815298    5852 out.go:239] * Suggestion: Restart Docker
W0516 22:08:24.815298    5852 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
I0516 22:08:24.819508    5852 out.go:177] 

--- FAIL: TestFunctional/serial/LogsFileCmd (4.34s)

TestFunctional/parallel/StatusCmd (13.17s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:846: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 status

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:846: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 status: exit status 7 (2.9629413s)

-- stdout --
	functional-20220516220221-2444
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E0516 22:08:45.880699    4908 status.go:258] status error: host: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	E0516 22:08:45.880699    4908 status.go:261] The "functional-20220516220221-2444" host does not exist!

** /stderr **
functional_test.go:848: failed to run minikube status. args "out/minikube-windows-amd64.exe -p functional-20220516220221-2444 status" : exit status 7
functional_test.go:852: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:852: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (2.9817404s)

-- stdout --
	host:Nonexistent,kublet:Nonexistent,apiserver:Nonexistent,kubeconfig:Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:08:48.862449    9212 status.go:258] status error: host: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	E0516 22:08:48.862449    9212 status.go:261] The "functional-20220516220221-2444" host does not exist!

** /stderr **
functional_test.go:854: failed to run minikube status with custom format: args "out/minikube-windows-amd64.exe -p functional-20220516220221-2444 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:864: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 status -o json

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:864: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 status -o json: exit status 7 (2.9967403s)

-- stdout --
	{"Name":"functional-20220516220221-2444","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}

-- /stdout --
** stderr ** 
	E0516 22:08:51.858307    6800 status.go:258] status error: host: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	E0516 22:08:51.858307    6800 status.go:261] The "functional-20220516220221-2444" host does not exist!

** /stderr **
functional_test.go:866: failed to run minikube status with json output. args "out/minikube-windows-amd64.exe -p functional-20220516220221-2444 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220516220221-2444

=== CONT  TestFunctional/parallel/StatusCmd
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220516220221-2444: exit status 1 (1.1792252s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220516220221-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220516220221-2444 -n functional-20220516220221-2444

=== CONT  TestFunctional/parallel/StatusCmd
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220516220221-2444 -n functional-20220516220221-2444: exit status 7 (3.0345905s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:08:56.099132    6796 status.go:247] status error: host: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220516220221-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/StatusCmd (13.17s)

TestFunctional/parallel/ServiceCmd (5.38s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1432: (dbg) Run:  kubectl --context functional-20220516220221-2444 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1432: (dbg) Non-zero exit: kubectl --context functional-20220516220221-2444 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8: exit status 1 (300.1097ms)

** stderr ** 
	W0516 22:08:55.304905    5084 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	error: context "functional-20220516220221-2444" does not exist

** /stderr **
functional_test.go:1436: failed to create hello-node deployment with this command "kubectl --context functional-20220516220221-2444 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8": exit status 1.
functional_test.go:1401: service test failed - dumping debug information
functional_test.go:1402: -----------------------service failure post-mortem--------------------------------
functional_test.go:1405: (dbg) Run:  kubectl --context functional-20220516220221-2444 describe po hello-node
functional_test.go:1405: (dbg) Non-zero exit: kubectl --context functional-20220516220221-2444 describe po hello-node: exit status 1 (306.313ms)

** stderr ** 
	W0516 22:08:55.622508    7648 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220516220221-2444
	* cluster has no server defined

** /stderr **
functional_test.go:1407: "kubectl --context functional-20220516220221-2444 describe po hello-node" failed: exit status 1
functional_test.go:1409: hello-node pod describe:
functional_test.go:1411: (dbg) Run:  kubectl --context functional-20220516220221-2444 logs -l app=hello-node
functional_test.go:1411: (dbg) Non-zero exit: kubectl --context functional-20220516220221-2444 logs -l app=hello-node: exit status 1 (294.1782ms)

** stderr ** 
	W0516 22:08:55.924939    7196 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220516220221-2444
	* cluster has no server defined

** /stderr **
functional_test.go:1413: "kubectl --context functional-20220516220221-2444 logs -l app=hello-node" failed: exit status 1
functional_test.go:1415: hello-node logs:
functional_test.go:1417: (dbg) Run:  kubectl --context functional-20220516220221-2444 describe svc hello-node

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1417: (dbg) Non-zero exit: kubectl --context functional-20220516220221-2444 describe svc hello-node: exit status 1 (310.8853ms)

** stderr ** 
	W0516 22:08:56.237356    5752 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220516220221-2444
	* cluster has no server defined

** /stderr **
functional_test.go:1419: "kubectl --context functional-20220516220221-2444 describe svc hello-node" failed: exit status 1
functional_test.go:1421: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/ServiceCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220516220221-2444

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220516220221-2444: exit status 1 (1.1403174s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220516220221-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220516220221-2444 -n functional-20220516220221-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220516220221-2444 -n functional-20220516220221-2444: exit status 7 (2.9966287s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:09:00.440053    4172 status.go:247] status error: host: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220516220221-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/ServiceCmd (5.38s)

TestFunctional/parallel/ServiceCmdConnect (5.54s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20220516220221-2444 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20220516220221-2444 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8: exit status 1 (307.8206ms)

** stderr ** 
	W0516 22:08:49.774090    4536 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	error: context "functional-20220516220221-2444" does not exist

** /stderr **
functional_test.go:1562: failed to create hello-node deployment with this command "kubectl --context functional-20220516220221-2444 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8": exit status 1.
functional_test.go:1527: service test failed - dumping debug information
functional_test.go:1528: -----------------------service failure post-mortem--------------------------------
functional_test.go:1531: (dbg) Run:  kubectl --context functional-20220516220221-2444 describe po hello-node-connect
functional_test.go:1531: (dbg) Non-zero exit: kubectl --context functional-20220516220221-2444 describe po hello-node-connect: exit status 1 (293.3156ms)

** stderr ** 
	W0516 22:08:50.082034    7888 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220516220221-2444
	* cluster has no server defined

** /stderr **
functional_test.go:1533: "kubectl --context functional-20220516220221-2444 describe po hello-node-connect" failed: exit status 1
functional_test.go:1535: hello-node pod describe:
functional_test.go:1537: (dbg) Run:  kubectl --context functional-20220516220221-2444 logs -l app=hello-node-connect
functional_test.go:1537: (dbg) Non-zero exit: kubectl --context functional-20220516220221-2444 logs -l app=hello-node-connect: exit status 1 (309.2251ms)

** stderr ** 
	W0516 22:08:50.380786    5288 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220516220221-2444
	* cluster has no server defined

** /stderr **
functional_test.go:1539: "kubectl --context functional-20220516220221-2444 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1541: hello-node logs:
functional_test.go:1543: (dbg) Run:  kubectl --context functional-20220516220221-2444 describe svc hello-node-connect
functional_test.go:1543: (dbg) Non-zero exit: kubectl --context functional-20220516220221-2444 describe svc hello-node-connect: exit status 1 (309.7091ms)

** stderr ** 
	W0516 22:08:50.703879    9156 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220516220221-2444
	* cluster has no server defined

** /stderr **
functional_test.go:1545: "kubectl --context functional-20220516220221-2444 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1547: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220516220221-2444

=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220516220221-2444: exit status 1 (1.2139304s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220516220221-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220516220221-2444 -n functional-20220516220221-2444

=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220516220221-2444 -n functional-20220516220221-2444: exit status 7 (3.0729235s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:08:55.055409    6776 status.go:247] status error: host: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220516220221-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (5.54s)

TestFunctional/parallel/PersistentVolumeClaim (4.14s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-20220516220221-2444" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220516220221-2444

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220516220221-2444: exit status 1 (1.2077013s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220516220221-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220516220221-2444 -n functional-20220516220221-2444

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220516220221-2444 -n functional-20220516220221-2444: exit status 7 (2.9183609s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:09:24.174429    7856 status.go:247] status error: host: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220516220221-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (4.14s)

TestFunctional/parallel/SSHCmd (10.78s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1654: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh "echo hello"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1654: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh "echo hello": exit status 80 (3.3021721s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_addons_d61ea6249774aeb558bf50466bbbb86924adfa03_1.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1659: failed to run an ssh command. args "out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh \"echo hello\"" : exit status 80
functional_test.go:1663: expected minikube ssh command output to be -"hello"- but got *"\n\n"*. args "out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh \"echo hello\""
functional_test.go:1671: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh "cat /etc/hostname"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1671: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh "cat /etc/hostname": exit status 80 (3.2465165s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_38bcdef24fb924cc90e97c11e7d475c51b658987_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1677: failed to run an ssh command. args "out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh \"cat /etc/hostname\"" : exit status 80
functional_test.go:1681: expected minikube ssh command output to be -"functional-20220516220221-2444"- but got *"\n\n"*. args "out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/SSHCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220516220221-2444

=== CONT  TestFunctional/parallel/SSHCmd
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220516220221-2444: exit status 1 (1.1753092s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220516220221-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220516220221-2444 -n functional-20220516220221-2444

=== CONT  TestFunctional/parallel/SSHCmd
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220516220221-2444 -n functional-20220516220221-2444: exit status 7 (3.0448619s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:08:51.485954    4156 status.go:247] status error: host: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220516220221-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/SSHCmd (10.78s)

TestFunctional/parallel/CpCmd (12.77s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 cp testdata\cp-test.txt /home/docker/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 cp testdata\cp-test.txt /home/docker/cp-test.txt: exit status 80 (3.2225487s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_d6dc9f92b06d1e3892ec2580ca1ffb1975c7d2f1_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
helpers_test.go:559: failed to run an cp command. args "out/minikube-windows-amd64.exe -p functional-20220516220221-2444 cp testdata\\cp-test.txt /home/docker/cp-test.txt" : exit status 80
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh -n functional-20220516220221-2444 "sudo cat /home/docker/cp-test.txt"

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh -n functional-20220516220221-2444 "sudo cat /home/docker/cp-test.txt": exit status 80 (3.1939421s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_15a8ec4b54c4600ccdf64f589dd9f75cfe823832_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
helpers_test.go:537: failed to run an cp command. args "out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh -n functional-20220516220221-2444 \"sudo cat /home/docker/cp-test.txt\"" : exit status 80
helpers_test.go:571: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"Test file for checking file cp process",
+ 	"\n\n",
)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 cp functional-20220516220221-2444:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalparallelCpCmd109301697\001\cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 cp functional-20220516220221-2444:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalparallelCpCmd109301697\001\cp-test.txt: exit status 80 (3.1876299s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                    │
	│    * If the above advice does not help, please let us know:                                                        │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                      │
	│                                                                                                                    │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                           │
	│    * Please also attach the following file to the GitHub issue:                                                    │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_cp_b7b0cf51ac10f194f8c0a8cc1fbacb5f94d6c309_0.log    │
	│                                                                                                                    │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
helpers_test.go:559: failed to run an cp command. args "out/minikube-windows-amd64.exe -p functional-20220516220221-2444 cp functional-20220516220221-2444:/home/docker/cp-test.txt C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\TestFunctionalparallelCpCmd109301697\\001\\cp-test.txt" : exit status 80
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh -n functional-20220516220221-2444 "sudo cat /home/docker/cp-test.txt"

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh -n functional-20220516220221-2444 "sudo cat /home/docker/cp-test.txt": exit status 80 (3.1531335s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_f9fbdc48f4e6e25fa352a85c2bd7e3c2c13fee65_11.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
helpers_test.go:537: failed to run an cp command. args "out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh -n functional-20220516220221-2444 \"sudo cat /home/docker/cp-test.txt\"" : exit status 80
helpers_test.go:526: failed to read test file 'testdata/cp-test.txt' : open C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalparallelCpCmd109301697\001\cp-test.txt: The system cannot find the file specified.
helpers_test.go:571: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"\n\n",
+ 	"",
)
--- FAIL: TestFunctional/parallel/CpCmd (12.77s)
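Every exit-status-80 failure in this run shares one root cause: the `functional-20220516220221-2444` container is gone, so minikube's `docker container inspect` probe fails before any ssh/cp command can run. A minimal sketch of that probe (the container name below is a placeholder assumed not to exist on the machine running it):

```shell
# Probe a container's state the way minikube's guest-status check does.
# A missing container (or an unreachable docker daemon) makes the inspect
# fail, which minikube surfaces as GUEST_STATUS / "unknown state".
name="definitely-missing-container-0000"
if state=$(docker container inspect "$name" --format '{{.State.Status}}' 2>/dev/null); then
  echo "state: $state"
else
  echo "unknown state \"$name\""
fi
```

With a real, running container the `--format '{{.State.Status}}'` query would print `running` instead; here the inspect fails, matching the `exit status 1` seen throughout the log.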

TestFunctional/parallel/MySQL (4.47s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1719: (dbg) Run:  kubectl --context functional-20220516220221-2444 replace --force -f testdata\mysql.yaml
functional_test.go:1719: (dbg) Non-zero exit: kubectl --context functional-20220516220221-2444 replace --force -f testdata\mysql.yaml: exit status 1 (300.7935ms)

** stderr ** 
	W0516 22:09:15.816276    8616 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	error: context "functional-20220516220221-2444" does not exist

** /stderr **
functional_test.go:1721: failed to kubectl replace mysql: args "kubectl --context functional-20220516220221-2444 replace --force -f testdata\\mysql.yaml" failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220516220221-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220516220221-2444: exit status 1 (1.170581s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220516220221-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220516220221-2444 -n functional-20220516220221-2444

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220516220221-2444 -n functional-20220516220221-2444: exit status 7 (2.982753s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:09:20.032711    5156 status.go:247] status error: host: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220516220221-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/MySQL (4.47s)
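The MySQL test never reached the cluster: with `C:\Users\jenkins.minikube2\minikube-integration\kubeconfig` missing, kubectl aborts at context lookup ("context ... does not exist"). A defensive pre-check distinguishes a missing context from a real cluster error; a sketch, with the context name taken from the log and assumed absent on the machine running it:

```shell
# Verify a kubectl context exists before using it; "kubectl config
# get-contexts <name>" exits non-zero when the context is not present
# in the active kubeconfig (including when the kubeconfig is missing).
ctx="functional-20220516220221-2444"
if kubectl config get-contexts "$ctx" >/dev/null 2>&1; then
  echo "context present: $ctx"
else
  echo "context missing: $ctx"
fi
```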

TestFunctional/parallel/FileSync (7.31s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1855: Checking for existence of /etc/test/nested/copy/2444/hosts within VM
functional_test.go:1857: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh "sudo cat /etc/test/nested/copy/2444/hosts"

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1857: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh "sudo cat /etc/test/nested/copy/2444/hosts": exit status 80 (3.2903908s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_a152963a1b1efb5c3f0eee4862b3ae0b488040d2_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1859: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh "sudo cat /etc/test/nested/copy/2444/hosts" failed: exit status 80
functional_test.go:1862: file sync test content: 

functional_test.go:1872: /etc/sync.test content mismatch (-want +got):
string(
- 	"Test file for checking file sync process",
+ 	"\n\n",
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/FileSync]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220516220221-2444

=== CONT  TestFunctional/parallel/FileSync
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220516220221-2444: exit status 1 (1.1301318s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220516220221-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220516220221-2444 -n functional-20220516220221-2444

=== CONT  TestFunctional/parallel/FileSync
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220516220221-2444 -n functional-20220516220221-2444: exit status 7 (2.8747496s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:09:11.189830    9104 status.go:247] status error: host: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220516220221-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/FileSync (7.31s)

TestFunctional/parallel/CertSync (23.45s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /etc/ssl/certs/2444.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh "sudo cat /etc/ssl/certs/2444.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh "sudo cat /etc/ssl/certs/2444.pem": exit status 80 (3.1844439s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_9315c892df9c880ad078c60229ff34e7ca642e19_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1901: failed to check existence of "/etc/ssl/certs/2444.pem" inside minikube. args "out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh \"sudo cat /etc/ssl/certs/2444.pem\"": exit status 80
functional_test.go:1907: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/2444.pem mismatch (-want +got):
string(
- 	"-----BEGIN CERTIFICATE-----\nMIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC\nVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x\nETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD\nVQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3D"...,
+ 	"\n\n",
)
functional_test.go:1898: Checking for existence of /usr/share/ca-certificates/2444.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh "sudo cat /usr/share/ca-certificates/2444.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh "sudo cat /usr/share/ca-certificates/2444.pem": exit status 80 (3.2435344s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_3a279297743c69c42b870b336b3432f7f592685c_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1901: failed to check existence of "/usr/share/ca-certificates/2444.pem" inside minikube. args "out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh \"sudo cat /usr/share/ca-certificates/2444.pem\"": exit status 80
functional_test.go:1907: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/2444.pem mismatch (-want +got):
string(
- 	"-----BEGIN CERTIFICATE-----\nMIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC\nVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x\nETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD\nVQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3D"...,
+ 	"\n\n",
)
functional_test.go:1898: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1899: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh "sudo cat /etc/ssl/certs/51391683.0"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 80 (3.1971138s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_fea49abfab0323d8512b535581403500420d48f0_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1901: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 80
functional_test.go:1907: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
string(
- 	"-----BEGIN CERTIFICATE-----\nMIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC\nVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x\nETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD\nVQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3D"...,
+ 	"\n\n",
)
functional_test.go:1925: Checking for existence of /etc/ssl/certs/24442.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh "sudo cat /etc/ssl/certs/24442.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh "sudo cat /etc/ssl/certs/24442.pem": exit status 80 (3.275912s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_image_3b5cffde4c8f5338c83cf8286af01893eed50c81_0.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1928: failed to check existence of "/etc/ssl/certs/24442.pem" inside minikube. args "out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh \"sudo cat /etc/ssl/certs/24442.pem\"": exit status 80
functional_test.go:1934: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/24442.pem mismatch (-want +got):
string(
- 	"-----BEGIN CERTIFICATE-----\nMIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV\nUzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy\nMDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN\nBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCA"...,
+ 	"\n\n",
)
functional_test.go:1925: Checking for existence of /usr/share/ca-certificates/24442.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh "sudo cat /usr/share/ca-certificates/24442.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh "sudo cat /usr/share/ca-certificates/24442.pem": exit status 80 (3.2235269s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                    │
	│    * If the above advice does not help, please let us know:                                                        │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                      │
	│                                                                                                                    │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                           │
	│    * Please also attach the following file to the GitHub issue:                                                    │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_cp_61e6e7c82587b4e90872857c87eff14ac40e447c_1.log    │
	│                                                                                                                    │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1928: failed to check existence of "/usr/share/ca-certificates/24442.pem" inside minikube. args "out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh \"sudo cat /usr/share/ca-certificates/24442.pem\"": exit status 80
functional_test.go:1934: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/24442.pem mismatch (-want +got):
string(
- 	"-----BEGIN CERTIFICATE-----\nMIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV\nUzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy\nMDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN\nBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCA"...,
+ 	"\n\n",
)
functional_test.go:1925: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1926: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 80 (3.2007031s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_f9fbdc48f4e6e25fa352a85c2bd7e3c2c13fee65_11.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1928: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 80
functional_test.go:1934: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
string(
- 	"-----BEGIN CERTIFICATE-----\nMIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV\nUzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy\nMDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN\nBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCA"...,
+ 	"\n\n",
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/CertSync]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220516220221-2444

=== CONT  TestFunctional/parallel/CertSync
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220516220221-2444: exit status 1 (1.1430806s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220516220221-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220516220221-2444 -n functional-20220516220221-2444

=== CONT  TestFunctional/parallel/CertSync
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220516220221-2444 -n functional-20220516220221-2444: exit status 7 (2.9672505s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:09:34.621761    7984 status.go:247] status error: host: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220516220221-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/CertSync (23.45s)

TestFunctional/parallel/NodeLabels (4.4s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Run:  kubectl --context functional-20220516220221-2444 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Non-zero exit: kubectl --context functional-20220516220221-2444 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (319.2196ms)

** stderr ** 
	W0516 22:09:11.427339    2796 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220516220221-2444
	* cluster has no server defined

** /stderr **
functional_test.go:216: failed to 'kubectl get nodes' with args "kubectl --context functional-20220516220221-2444 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:222: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	W0516 22:09:11.427339    2796 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220516220221-2444
	* cluster has no server defined

** /stderr **
functional_test.go:222: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	W0516 22:09:11.427339    2796 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220516220221-2444
	* cluster has no server defined

** /stderr **
functional_test.go:222: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	W0516 22:09:11.427339    2796 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220516220221-2444
	* cluster has no server defined

** /stderr **
functional_test.go:222: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	W0516 22:09:11.427339    2796 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220516220221-2444
	* cluster has no server defined

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220516220221-2444

=== CONT  TestFunctional/parallel/NodeLabels
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220516220221-2444: exit status 1 (1.1201203s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220516220221-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220516220221-2444 -n functional-20220516220221-2444

=== CONT  TestFunctional/parallel/NodeLabels
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220516220221-2444 -n functional-20220516220221-2444: exit status 7 (2.9492965s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:09:15.575351    2676 status.go:247] status error: host: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220516220221-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/NodeLabels (4.40s)

TestFunctional/parallel/NonActiveRuntimeDisabled (3.29s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh "sudo systemctl is-active crio"
functional_test.go:1953: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh "sudo systemctl is-active crio": exit status 80 (3.2857121s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_4c116c6236290140afdbb5dcaafee8e0c3250b76_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1956: output of 
-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_4c116c6236290140afdbb5dcaafee8e0c3250b76_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **: exit status 80
functional_test.go:1959: For runtime "docker": expected "crio" to be inactive but got "\n\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (3.29s)

TestFunctional/parallel/DockerEnv/powershell (10.11s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:491: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20220516220221-2444 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-20220516220221-2444"

=== CONT  TestFunctional/parallel/DockerEnv/powershell
functional_test.go:491: (dbg) Non-zero exit: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20220516220221-2444 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-20220516220221-2444": exit status 1 (10.1011841s)

-- stdout --
	functional-20220516220221-2444
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_docker-env_547776f721aba6dceba24106cb61c1127a06fa4f_3.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	false : The term 'false' is not recognized as the name of a cmdlet, function, script file, or operable program. Check 
	the spelling of the name, or if a path was included, verify that the path is correct and try again.
	At line:1 char:1
	+ false exit code 80
	+ ~~~~~
	    + CategoryInfo          : ObjectNotFound: (false:String) [], CommandNotFoundException
	    + FullyQualifiedErrorId : CommandNotFoundException
	 
	E0516 22:09:07.079382    6992 status.go:258] status error: host: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	E0516 22:09:07.079382    6992 status.go:261] The "functional-20220516220221-2444" host does not exist!

** /stderr **
functional_test.go:497: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/powershell (10.11s)

TestFunctional/parallel/Version/components (3.21s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2196: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 version -o=json --components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2196: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 version -o=json --components: exit status 80 (3.2106508s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_version_584df66c7473738ba6bddab8b00bad09d875c20e_1.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2198: error version: exit status 80
functional_test.go:2203: expected to see "buildctl" in the minikube version --components but got:

functional_test.go:2203: expected to see "commit" in the minikube version --components but got:

functional_test.go:2203: expected to see "containerd" in the minikube version --components but got:

functional_test.go:2203: expected to see "crictl" in the minikube version --components but got:

functional_test.go:2203: expected to see "crio" in the minikube version --components but got:

functional_test.go:2203: expected to see "ctr" in the minikube version --components but got:

functional_test.go:2203: expected to see "docker" in the minikube version --components but got:

functional_test.go:2203: expected to see "minikubeVersion" in the minikube version --components but got:

functional_test.go:2203: expected to see "podman" in the minikube version --components but got:

functional_test.go:2203: expected to see "run" in the minikube version --components but got:

functional_test.go:2203: expected to see "crun" in the minikube version --components but got:

--- FAIL: TestFunctional/parallel/Version/components (3.21s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:143: failed to get Kubernetes client for "functional-20220516220221-2444": client config: context "functional-20220516220221-2444" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/ImageCommands/ImageListShort (2.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 image ls --format short

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 image ls --format short: (2.9133216s)
functional_test.go:261: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 image ls --format short:

functional_test.go:270: expected k8s.gcr.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (2.91s)

TestFunctional/parallel/ImageCommands/ImageListTable (2.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 image ls --format table

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 image ls --format table: (2.8975887s)
functional_test.go:261: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 image ls --format table:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:270: expected | k8s.gcr.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (2.90s)

TestFunctional/parallel/ImageCommands/ImageListJson (2.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 image ls --format json

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 image ls --format json: (2.9312992s)
functional_test.go:261: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 image ls --format json:
[]
functional_test.go:270: expected ["k8s.gcr.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (2.93s)

TestFunctional/parallel/ImageCommands/ImageListYaml (2.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 image ls --format yaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 image ls --format yaml: (2.9603787s)
functional_test.go:261: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 image ls --format yaml:
[]

functional_test.go:270: expected - k8s.gcr.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (2.96s)

TestFunctional/parallel/ImageCommands/ImageBuild (8.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh pgrep buildkitd

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 ssh pgrep buildkitd: exit status 80 (3.1239074s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_f5578f3b7737bbd9a15ad6eab50db6197ebdaf5a_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:310: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 image build -t localhost/my-image:functional-20220516220221-2444 testdata\build

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:310: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 image build -t localhost/my-image:functional-20220516220221-2444 testdata\build: (2.890392s)
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 image ls
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 image ls: (2.8431471s)
functional_test.go:438: expected "localhost/my-image:functional-20220516220221-2444" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (8.86s)

TestFunctional/parallel/ImageCommands/Setup (2.06s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:337: (dbg) Non-zero exit: docker pull gcr.io/google-containers/addon-resizer:1.8.8: exit status 1 (2.0497307s)

** stderr ** 
	Error response from daemon: error creating temporary lease: write /var/lib/desktop-containerd/daemon/io.containerd.metadata.v1.bolt/meta.db: read-only file system: unknown

** /stderr **
functional_test.go:339: failed to setup test (pull image): exit status 1

** stderr ** 
	Error response from daemon: error creating temporary lease: write /var/lib/desktop-containerd/daemon/io.containerd.metadata.v1.bolt/meta.db: read-only file system: unknown

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/Setup (2.06s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220516220221-2444

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220516220221-2444: (3.0792144s)
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 image ls

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 image ls: (2.9566337s)
functional_test.go:438: expected "gcr.io/google-containers/addon-resizer:functional-20220516220221-2444" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.04s)

TestFunctional/parallel/UpdateContextCmd/no_changes (3.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 update-context --alsologtostderr -v=2

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2045: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 update-context --alsologtostderr -v=2: exit status 80 (3.1649523s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0516 22:09:34.877653    5780 out.go:296] Setting OutFile to fd 828 ...
	I0516 22:09:34.947092    5780 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:09:34.947092    5780 out.go:309] Setting ErrFile to fd 980...
	I0516 22:09:34.947092    5780 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:09:34.958836    5780 mustload.go:65] Loading cluster: functional-20220516220221-2444
	I0516 22:09:34.959609    5780 config.go:178] Loaded profile config "functional-20220516220221-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:09:34.976338    5780 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:09:37.549846    5780 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:09:37.549846    5780 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (2.5734958s)
	I0516 22:09:37.553808    5780 out.go:177] 
	W0516 22:09:37.556795    5780 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	W0516 22:09:37.556795    5780 out.go:239] * 
	* 
	W0516 22:09:37.780428    5780 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                │
	│    * If the above advice does not help, please let us know:                                                                    │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                  │
	│                                                                                                                                │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                       │
	│    * Please also attach the following file to the GitHub issue:                                                                │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_update-context_9738a94781505e531269d5196158beef5ee79b06_3.log    │
	│                                                                                                                                │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                │
	│    * If the above advice does not help, please let us know:                                                                    │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                  │
	│                                                                                                                                │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                       │
	│    * Please also attach the following file to the GitHub issue:                                                                │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_update-context_9738a94781505e531269d5196158beef5ee79b06_3.log    │
	│                                                                                                                                │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0516 22:09:37.785408    5780 out.go:177] 

** /stderr **
functional_test.go:2047: failed to run minikube update-context: args "out/minikube-windows-amd64.exe -p functional-20220516220221-2444 update-context --alsologtostderr -v=2": exit status 80
functional_test.go:2052: update-context: got="\n\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (3.16s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (3.26s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 update-context --alsologtostderr -v=2

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2045: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 update-context --alsologtostderr -v=2: exit status 80 (3.2565289s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0516 22:09:37.218632    2980 out.go:296] Setting OutFile to fd 708 ...
	I0516 22:09:37.287276    2980 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:09:37.287276    2980 out.go:309] Setting ErrFile to fd 756...
	I0516 22:09:37.287276    2980 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:09:37.301961    2980 mustload.go:65] Loading cluster: functional-20220516220221-2444
	I0516 22:09:37.303045    2980 config.go:178] Loaded profile config "functional-20220516220221-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:09:37.320980    2980 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:09:39.958553    2980 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:09:39.958747    2980 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (2.6375603s)
	I0516 22:09:39.964829    2980 out.go:177] 
	W0516 22:09:39.967286    2980 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	W0516 22:09:39.967286    2980 out.go:239] * 
	* 
	W0516 22:09:40.185605    2980 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                │
	│    * If the above advice does not help, please let us know:                                                                    │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                  │
	│                                                                                                                                │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                       │
	│    * Please also attach the following file to the GitHub issue:                                                                │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_update-context_9738a94781505e531269d5196158beef5ee79b06_3.log    │
	│                                                                                                                                │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                │
	│    * If the above advice does not help, please let us know:                                                                    │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                  │
	│                                                                                                                                │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                       │
	│    * Please also attach the following file to the GitHub issue:                                                                │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_update-context_9738a94781505e531269d5196158beef5ee79b06_3.log    │
	│                                                                                                                                │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0516 22:09:40.188606    2980 out.go:177] 

** /stderr **
functional_test.go:2047: failed to run minikube update-context: args "out/minikube-windows-amd64.exe -p functional-20220516220221-2444 update-context --alsologtostderr -v=2": exit status 80
functional_test.go:2052: update-context: got="\n\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (3.26s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (3.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 update-context --alsologtostderr -v=2

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2045: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 update-context --alsologtostderr -v=2: exit status 80 (3.169415s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0516 22:09:36.732058    6872 out.go:296] Setting OutFile to fd 996 ...
	I0516 22:09:36.801564    6872 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:09:36.801564    6872 out.go:309] Setting ErrFile to fd 724...
	I0516 22:09:36.801564    6872 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:09:36.813851    6872 mustload.go:65] Loading cluster: functional-20220516220221-2444
	I0516 22:09:36.814714    6872 config.go:178] Loaded profile config "functional-20220516220221-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:09:36.833297    6872 cli_runner.go:164] Run: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}
	W0516 22:09:39.371256    6872 cli_runner.go:211] docker container inspect functional-20220516220221-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:09:39.371256    6872 cli_runner.go:217] Completed: docker container inspect functional-20220516220221-2444 --format={{.State.Status}}: (2.5379467s)
	I0516 22:09:39.377569    6872 out.go:177] 
	W0516 22:09:39.379579    6872 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220516220221-2444": docker container inspect functional-20220516220221-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220516220221-2444
	
	W0516 22:09:39.379579    6872 out.go:239] * 
	* 
	W0516 22:09:39.603580    6872 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                │
	│    * If the above advice does not help, please let us know:                                                                    │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                  │
	│                                                                                                                                │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                       │
	│    * Please also attach the following file to the GitHub issue:                                                                │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_update-context_9738a94781505e531269d5196158beef5ee79b06_3.log    │
	│                                                                                                                                │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                │
	│    * If the above advice does not help, please let us know:                                                                    │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                  │
	│                                                                                                                                │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                       │
	│    * Please also attach the following file to the GitHub issue:                                                                │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_update-context_9738a94781505e531269d5196158beef5ee79b06_3.log    │
	│                                                                                                                                │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0516 22:09:39.606584    6872 out.go:177] 

** /stderr **
functional_test.go:2047: failed to run minikube update-context: args "out/minikube-windows-amd64.exe -p functional-20220516220221-2444 update-context --alsologtostderr -v=2": exit status 80
functional_test.go:2052: update-context: got="\n\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (3.17s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (6.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220516220221-2444

=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220516220221-2444: (3.0816887s)
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 image ls

=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 image ls: (2.9903135s)
functional_test.go:438: expected "gcr.io/google-containers/addon-resizer:functional-20220516220221-2444" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (6.07s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9

=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Non-zero exit: docker pull gcr.io/google-containers/addon-resizer:1.8.9: exit status 1 (2.0240235s)

** stderr ** 
	Error response from daemon: error creating temporary lease: write /var/lib/desktop-containerd/daemon/io.containerd.metadata.v1.bolt/meta.db: read-only file system: unknown

** /stderr **
functional_test.go:232: failed to setup test (pull image): exit status 1

** stderr ** 
	Error response from daemon: error creating temporary lease: write /var/lib/desktop-containerd/daemon/io.containerd.metadata.v1.bolt/meta.db: read-only file system: unknown

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.03s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 image save gcr.io/google-containers/addon-resizer:functional-20220516220221-2444 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar

=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 image save gcr.io/google-containers/addon-resizer:functional-20220516220221-2444 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: (2.9772278s)
functional_test.go:381: expected "C:\\jenkins\\workspace\\Docker_Windows_integration\\addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.98s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: exit status 80 (2.2606291s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_IMAGE_LOAD: save to dir: caching images: caching image "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Docker_Windows_integration\\addon-resizer-save.tar": parsing image ref name for C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: could not parse reference: C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_image_4f97aa0f12ba576a16ca2b05292f7afcda49921e_1.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:406: loading image into minikube from file: exit status 80

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_IMAGE_LOAD: save to dir: caching images: caching image "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Docker_Windows_integration\\addon-resizer-save.tar": parsing image ref name for C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: could not parse reference: C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_image_4f97aa0f12ba576a16ca2b05292f7afcda49921e_1.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.26s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:414: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220516220221-2444
functional_test.go:414: (dbg) Non-zero exit: docker rmi gcr.io/google-containers/addon-resizer:functional-20220516220221-2444: exit status 1 (1.0911555s)

** stderr ** 
	Error: No such image: gcr.io/google-containers/addon-resizer:functional-20220516220221-2444

** /stderr **
functional_test.go:416: failed to remove image from docker: exit status 1

** stderr ** 
	Error: No such image: gcr.io/google-containers/addon-resizer:functional-20220516220221-2444

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.10s)

TestIngressAddonLegacy/StartLegacyK8sCluster (79.4s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-windows-amd64.exe start -p ingress-addon-legacy-20220516221408-2444 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p ingress-addon-legacy-20220516221408-2444 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker: exit status 60 (1m19.3255053s)

-- stdout --
	* [ingress-addon-legacy-20220516221408-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node ingress-addon-legacy-20220516221408-2444 in cluster ingress-addon-legacy-20220516221408-2444
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* docker "ingress-addon-legacy-20220516221408-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	
	

-- /stdout --
** stderr ** 
	I0516 22:14:08.727236    1812 out.go:296] Setting OutFile to fd 836 ...
	I0516 22:14:08.795080    1812 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:14:08.795080    1812 out.go:309] Setting ErrFile to fd 740...
	I0516 22:14:08.795080    1812 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:14:08.805454    1812 out.go:303] Setting JSON to false
	I0516 22:14:08.807436    1812 start.go:115] hostinfo: {"hostname":"minikube2","uptime":2361,"bootTime":1652736887,"procs":144,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0516 22:14:08.807436    1812 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0516 22:14:08.813270    1812 out.go:177] * [ingress-addon-legacy-20220516221408-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0516 22:14:08.817142    1812 notify.go:193] Checking for updates...
	I0516 22:14:08.821840    1812 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0516 22:14:08.823955    1812 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0516 22:14:08.826472    1812 out.go:177]   - MINIKUBE_LOCATION=12739
	I0516 22:14:08.829615    1812 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0516 22:14:08.832325    1812 driver.go:358] Setting default libvirt URI to qemu:///system
	I0516 22:14:11.367418    1812 docker.go:137] docker version: linux-20.10.14
	I0516 22:14:11.376508    1812 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:14:13.382893    1812 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.006374s)
	I0516 22:14:13.383455    1812 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 22:14:12.3608545 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:14:13.388548    1812 out.go:177] * Using the docker driver based on user configuration
	I0516 22:14:13.391938    1812 start.go:284] selected driver: docker
	I0516 22:14:13.391938    1812 start.go:806] validating driver "docker" against <nil>
	I0516 22:14:13.391938    1812 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0516 22:14:13.520651    1812 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:14:15.525933    1812 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0050893s)
	I0516 22:14:15.526435    1812 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 22:14:14.5079614 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:14:15.526827    1812 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0516 22:14:15.527863    1812 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0516 22:14:15.530712    1812 out.go:177] * Using Docker Desktop driver with the root privilege
	I0516 22:14:15.533230    1812 cni.go:95] Creating CNI manager for ""
	I0516 22:14:15.533230    1812 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0516 22:14:15.533230    1812 start_flags.go:306] config:
	{Name:ingress-addon-legacy-20220516221408-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220516221408-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 22:14:15.537336    1812 out.go:177] * Starting control plane node ingress-addon-legacy-20220516221408-2444 in cluster ingress-addon-legacy-20220516221408-2444
	I0516 22:14:15.540260    1812 cache.go:120] Beginning downloading kic base image for docker with docker
	I0516 22:14:15.543589    1812 out.go:177] * Pulling base image ...
	I0516 22:14:15.546038    1812 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0516 22:14:15.546038    1812 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0516 22:14:15.593972    1812 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0516 22:14:15.594452    1812 cache.go:57] Caching tarball of preloaded images
	I0516 22:14:15.594478    1812 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0516 22:14:15.599842    1812 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0516 22:14:15.602269    1812 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0516 22:14:15.687699    1812 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0516 22:14:16.670095    1812 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0516 22:14:16.670304    1812 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:14:16.671549    1812 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:14:16.671549    1812 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
	I0516 22:14:16.671549    1812 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory, skipping pull
	I0516 22:14:16.671549    1812 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in cache, skipping pull
	I0516 22:14:16.671549    1812 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
	I0516 22:14:16.671549    1812 cache.go:160] Loading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from local cache
	I0516 22:14:16.671549    1812 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:14:19.015244    1812 image.go:219] response: {"errorDetail":{"message":"mkdir /var/lib/docker/tmp/docker-import-759494883: read-only file system"},"error":"mkdir /var/lib/docker/tmp/docker-import-759494883: read-only file system"}
	I0516 22:14:19.015359    1812 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from cached tarball
	I0516 22:14:19.254582    1812 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0516 22:14:19.256256    1812 preload.go:256] verifying checksum of C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0516 22:14:20.416683    1812 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0516 22:14:20.416683    1812 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220516221408-2444\config.json ...
	I0516 22:14:20.417983    1812 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220516221408-2444\config.json: {Name:mk019e17a13fc31c9aaca6d47673cee7685ee5da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0516 22:14:20.418920    1812 cache.go:206] Successfully downloaded all kic artifacts
	I0516 22:14:20.419383    1812 start.go:352] acquiring machines lock for ingress-addon-legacy-20220516221408-2444: {Name:mkbd2504ef973e6a87bcf74c13b923c6e81afb91 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:14:20.419383    1812 start.go:356] acquired machines lock for "ingress-addon-legacy-20220516221408-2444" in 0s
	I0516 22:14:20.419383    1812 start.go:91] Provisioning new machine with config: &{Name:ingress-addon-legacy-20220516221408-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220516221408-2444 Namespace:def
ault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0516 22:14:20.419383    1812 start.go:131] createHost starting for "" (driver="docker")
	I0516 22:14:20.426319    1812 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0516 22:14:20.426973    1812 start.go:165] libmachine.API.Create for "ingress-addon-legacy-20220516221408-2444" (driver="docker")
	I0516 22:14:20.427157    1812 client.go:168] LocalClient.Create starting
	I0516 22:14:20.427521    1812 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 22:14:20.427521    1812 main.go:134] libmachine: Decoding PEM data...
	I0516 22:14:20.427521    1812 main.go:134] libmachine: Parsing certificate...
	I0516 22:14:20.427521    1812 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 22:14:20.428199    1812 main.go:134] libmachine: Decoding PEM data...
	I0516 22:14:20.428293    1812 main.go:134] libmachine: Parsing certificate...
	I0516 22:14:20.437585    1812 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220516221408-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:14:21.474500    1812 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220516221408-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:14:21.474533    1812 cli_runner.go:217] Completed: docker network inspect ingress-addon-legacy-20220516221408-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0367593s)
	I0516 22:14:21.483706    1812 network_create.go:272] running [docker network inspect ingress-addon-legacy-20220516221408-2444] to gather additional debugging logs...
	I0516 22:14:21.483706    1812 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220516221408-2444
	W0516 22:14:22.537379    1812 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220516221408-2444 returned with exit code 1
	I0516 22:14:22.537859    1812 cli_runner.go:217] Completed: docker network inspect ingress-addon-legacy-20220516221408-2444: (1.0536673s)
	I0516 22:14:22.537859    1812 network_create.go:275] error running [docker network inspect ingress-addon-legacy-20220516221408-2444]: docker network inspect ingress-addon-legacy-20220516221408-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-20220516221408-2444
	I0516 22:14:22.537994    1812 network_create.go:277] output of [docker network inspect ingress-addon-legacy-20220516221408-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-20220516221408-2444
	
	** /stderr **
	I0516 22:14:22.546242    1812 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 22:14:23.601766    1812 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0551364s)
	I0516 22:14:23.623403    1812 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000590118] misses:0}
	I0516 22:14:23.624622    1812 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:14:23.624651    1812 network_create.go:115] attempt to create docker network ingress-addon-legacy-20220516221408-2444 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0516 22:14:23.636164    1812 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220516221408-2444
	W0516 22:14:24.648593    1812 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220516221408-2444 returned with exit code 1
	I0516 22:14:24.648738    1812 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220516221408-2444: (1.0122237s)
	W0516 22:14:24.648818    1812 network_create.go:107] failed to create docker network ingress-addon-legacy-20220516221408-2444 192.168.49.0/24, will retry: subnet is taken
	I0516 22:14:24.668289    1812 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000590118] amended:false}} dirty:map[] misses:0}
	I0516 22:14:24.668289    1812 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:14:24.693466    1812 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000590118] amended:true}} dirty:map[192.168.49.0:0xc000590118 192.168.58.0:0xc00040c270] misses:0}
	I0516 22:14:24.693528    1812 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:14:24.693528    1812 network_create.go:115] attempt to create docker network ingress-addon-legacy-20220516221408-2444 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0516 22:14:24.702222    1812 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220516221408-2444
	W0516 22:14:25.711316    1812 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220516221408-2444 returned with exit code 1
	I0516 22:14:25.711316    1812 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220516221408-2444: (1.0081103s)
	W0516 22:14:25.711316    1812 network_create.go:107] failed to create docker network ingress-addon-legacy-20220516221408-2444 192.168.58.0/24, will retry: subnet is taken
	I0516 22:14:25.729308    1812 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000590118] amended:true}} dirty:map[192.168.49.0:0xc000590118 192.168.58.0:0xc00040c270] misses:1}
	I0516 22:14:25.730064    1812 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:14:25.756116    1812 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000590118] amended:true}} dirty:map[192.168.49.0:0xc000590118 192.168.58.0:0xc00040c270 192.168.67.0:0xc000516340] misses:1}
	I0516 22:14:25.756116    1812 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:14:25.756116    1812 network_create.go:115] attempt to create docker network ingress-addon-legacy-20220516221408-2444 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0516 22:14:25.765240    1812 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220516221408-2444
	W0516 22:14:26.775378    1812 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220516221408-2444 returned with exit code 1
	I0516 22:14:26.775549    1812 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220516221408-2444: (1.0100728s)
	W0516 22:14:26.775549    1812 network_create.go:107] failed to create docker network ingress-addon-legacy-20220516221408-2444 192.168.67.0/24, will retry: subnet is taken
	I0516 22:14:26.793430    1812 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000590118] amended:true}} dirty:map[192.168.49.0:0xc000590118 192.168.58.0:0xc00040c270 192.168.67.0:0xc000516340] misses:2}
	I0516 22:14:26.793430    1812 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:14:26.810409    1812 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000590118] amended:true}} dirty:map[192.168.49.0:0xc000590118 192.168.58.0:0xc00040c270 192.168.67.0:0xc000516340 192.168.76.0:0xc000590238] misses:2}
	I0516 22:14:26.810409    1812 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:14:26.810409    1812 network_create.go:115] attempt to create docker network ingress-addon-legacy-20220516221408-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0516 22:14:26.821478    1812 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220516221408-2444
	W0516 22:14:27.839187    1812 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220516221408-2444 returned with exit code 1
	I0516 22:14:27.839340    1812 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220516221408-2444: (1.0175779s)
	E0516 22:14:27.839419    1812 network_create.go:104] error while trying to create docker network ingress-addon-legacy-20220516221408-2444 192.168.76.0/24: create docker network ingress-addon-legacy-20220516221408-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220516221408-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4953812f3f29d88e13dfb0cb8954e7dfff8d0fb07bdfdf6497d8bc34494b284c (br-4953812f3f29): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	W0516 22:14:27.839445    1812 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network ingress-addon-legacy-20220516221408-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220516221408-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4953812f3f29d88e13dfb0cb8954e7dfff8d0fb07bdfdf6497d8bc34494b284c (br-4953812f3f29): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network ingress-addon-legacy-20220516221408-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220516221408-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4953812f3f29d88e13dfb0cb8954e7dfff8d0fb07bdfdf6497d8bc34494b284c (br-4953812f3f29): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	I0516 22:14:27.857870    1812 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 22:14:28.903760    1812 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0456951s)
	I0516 22:14:28.911752    1812 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-20220516221408-2444 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220516221408-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 22:14:29.968360    1812 cli_runner.go:211] docker volume create ingress-addon-legacy-20220516221408-2444 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220516221408-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 22:14:29.968360    1812 cli_runner.go:217] Completed: docker volume create ingress-addon-legacy-20220516221408-2444 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220516221408-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0562659s)
	I0516 22:14:29.968360    1812 client.go:171] LocalClient.Create took 9.5411521s
	I0516 22:14:31.983994    1812 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:14:31.991360    1812 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444
	W0516 22:14:33.033214    1812 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444 returned with exit code 1
	I0516 22:14:33.033214    1812 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444: (1.0416506s)
	I0516 22:14:33.033214    1812 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220516221408-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	I0516 22:14:33.318277    1812 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444
	W0516 22:14:34.304606    1812 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444 returned with exit code 1
	W0516 22:14:34.304606    1812 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220516221408-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	
	W0516 22:14:34.304606    1812 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220516221408-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	I0516 22:14:34.315205    1812 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:14:34.323341    1812 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444
	W0516 22:14:35.339574    1812 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444 returned with exit code 1
	I0516 22:14:35.339574    1812 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444: (1.0162271s)
	I0516 22:14:35.339574    1812 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220516221408-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	I0516 22:14:35.640336    1812 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444
	W0516 22:14:36.694399    1812 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444 returned with exit code 1
	I0516 22:14:36.694399    1812 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444: (1.0538384s)
	W0516 22:14:36.694399    1812 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220516221408-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	
	W0516 22:14:36.694399    1812 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220516221408-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	I0516 22:14:36.694399    1812 start.go:134] duration metric: createHost completed in 16.2749283s
	I0516 22:14:36.694399    1812 start.go:81] releasing machines lock for "ingress-addon-legacy-20220516221408-2444", held for 16.2749283s
	W0516 22:14:36.695014    1812 start.go:608] error starting host: creating host: create: creating: setting up container node: creating volume for ingress-addon-legacy-20220516221408-2444 container: docker volume create ingress-addon-legacy-20220516221408-2444 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220516221408-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create ingress-addon-legacy-20220516221408-2444: error while creating volume root path '/var/lib/docker/volumes/ingress-addon-legacy-20220516221408-2444': mkdir /var/lib/docker/volumes/ingress-addon-legacy-20220516221408-2444: read-only file system
	I0516 22:14:36.712558    1812 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}
	W0516 22:14:37.724126    1812 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:14:37.724160    1812 cli_runner.go:217] Completed: docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}: (1.0114125s)
	I0516 22:14:37.724277    1812 delete.go:82] Unable to get host status for ingress-addon-legacy-20220516221408-2444, assuming it has already been deleted: state: unknown state "ingress-addon-legacy-20220516221408-2444": docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	W0516 22:14:37.724277    1812 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for ingress-addon-legacy-20220516221408-2444 container: docker volume create ingress-addon-legacy-20220516221408-2444 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220516221408-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create ingress-addon-legacy-20220516221408-2444: error while creating volume root path '/var/lib/docker/volumes/ingress-addon-legacy-20220516221408-2444': mkdir /var/lib/docker/volumes/ingress-addon-legacy-20220516221408-2444: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for ingress-addon-legacy-20220516221408-2444 container: docker volume create ingress-addon-legacy-20220516221408-2444 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220516221408-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create ingress-addon-legacy-20220516221408-2444: error while creating volume root path '/var/lib/docker/volumes/ingress-addon-legacy-20220516221408-2444': mkdir /var/lib/docker/volumes/ingress-addon-legacy-20220516221408-2444: read-only file system
	
	I0516 22:14:37.724277    1812 start.go:623] Will try again in 5 seconds ...
	I0516 22:14:42.724362    1812 start.go:352] acquiring machines lock for ingress-addon-legacy-20220516221408-2444: {Name:mkbd2504ef973e6a87bcf74c13b923c6e81afb91 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:14:42.724867    1812 start.go:356] acquired machines lock for "ingress-addon-legacy-20220516221408-2444" in 230.2µs
	I0516 22:14:42.725155    1812 start.go:94] Skipping create...Using existing machine configuration
	I0516 22:14:42.725233    1812 fix.go:55] fixHost starting: 
	I0516 22:14:42.741831    1812 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}
	W0516 22:14:43.739365    1812 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:14:43.739365    1812 fix.go:103] recreateIfNeeded on ingress-addon-legacy-20220516221408-2444: state= err=unknown state "ingress-addon-legacy-20220516221408-2444": docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	I0516 22:14:43.739365    1812 fix.go:108] machineExists: false. err=machine does not exist
	I0516 22:14:43.764293    1812 out.go:177] * docker "ingress-addon-legacy-20220516221408-2444" container is missing, will recreate.
	I0516 22:14:43.768043    1812 delete.go:124] DEMOLISHING ingress-addon-legacy-20220516221408-2444 ...
	I0516 22:14:43.783898    1812 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}
	W0516 22:14:44.820540    1812 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:14:44.820540    1812 cli_runner.go:217] Completed: docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}: (1.0366391s)
	W0516 22:14:44.820540    1812 stop.go:75] unable to get state: unknown state "ingress-addon-legacy-20220516221408-2444": docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	I0516 22:14:44.820540    1812 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "ingress-addon-legacy-20220516221408-2444": docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	I0516 22:14:44.845925    1812 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}
	W0516 22:14:45.871996    1812 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:14:45.871996    1812 cli_runner.go:217] Completed: docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}: (1.026069s)
	I0516 22:14:45.871996    1812 delete.go:82] Unable to get host status for ingress-addon-legacy-20220516221408-2444, assuming it has already been deleted: state: unknown state "ingress-addon-legacy-20220516221408-2444": docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	I0516 22:14:45.879014    1812 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ingress-addon-legacy-20220516221408-2444
	W0516 22:14:46.897126    1812 cli_runner.go:211] docker container inspect -f {{.Id}} ingress-addon-legacy-20220516221408-2444 returned with exit code 1
	I0516 22:14:46.897126    1812 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} ingress-addon-legacy-20220516221408-2444: (1.0178794s)
	I0516 22:14:46.897126    1812 kic.go:356] could not find the container ingress-addon-legacy-20220516221408-2444 to remove it. will try anyways
	I0516 22:14:46.905582    1812 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}
	W0516 22:14:47.928959    1812 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:14:47.929136    1812 cli_runner.go:217] Completed: docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}: (1.0233748s)
	W0516 22:14:47.929228    1812 oci.go:84] error getting container status, will try to delete anyways: unknown state "ingress-addon-legacy-20220516221408-2444": docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	I0516 22:14:47.938038    1812 cli_runner.go:164] Run: docker exec --privileged -t ingress-addon-legacy-20220516221408-2444 /bin/bash -c "sudo init 0"
	W0516 22:14:48.945009    1812 cli_runner.go:211] docker exec --privileged -t ingress-addon-legacy-20220516221408-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 22:14:48.945125    1812 cli_runner.go:217] Completed: docker exec --privileged -t ingress-addon-legacy-20220516221408-2444 /bin/bash -c "sudo init 0": (1.0069693s)
	I0516 22:14:48.945186    1812 oci.go:641] error shutdown ingress-addon-legacy-20220516221408-2444: docker exec --privileged -t ingress-addon-legacy-20220516221408-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	I0516 22:14:49.958246    1812 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}
	W0516 22:14:50.988670    1812 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:14:50.988761    1812 cli_runner.go:217] Completed: docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}: (1.0302577s)
	I0516 22:14:50.988855    1812 oci.go:653] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20220516221408-2444": docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	I0516 22:14:50.988920    1812 oci.go:655] temporary error: container ingress-addon-legacy-20220516221408-2444 status is  but expect it to be exited
	I0516 22:14:50.988983    1812 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20220516221408-2444": docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	I0516 22:14:51.475828    1812 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}
	W0516 22:14:52.496762    1812 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:14:52.496821    1812 cli_runner.go:217] Completed: docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}: (1.0208937s)
	I0516 22:14:52.496895    1812 oci.go:653] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20220516221408-2444": docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	I0516 22:14:52.496921    1812 oci.go:655] temporary error: container ingress-addon-legacy-20220516221408-2444 status is  but expect it to be exited
	I0516 22:14:52.496921    1812 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20220516221408-2444": docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	I0516 22:14:53.402363    1812 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}
	W0516 22:14:54.408284    1812 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:14:54.408360    1812 cli_runner.go:217] Completed: docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}: (1.0057133s)
	I0516 22:14:54.408446    1812 oci.go:653] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20220516221408-2444": docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	I0516 22:14:54.408495    1812 oci.go:655] temporary error: container ingress-addon-legacy-20220516221408-2444 status is  but expect it to be exited
	I0516 22:14:54.408571    1812 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20220516221408-2444": docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	I0516 22:14:55.066580    1812 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}
	W0516 22:14:56.113795    1812 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:14:56.113795    1812 cli_runner.go:217] Completed: docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}: (1.0460116s)
	I0516 22:14:56.113795    1812 oci.go:653] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20220516221408-2444": docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	I0516 22:14:56.113795    1812 oci.go:655] temporary error: container ingress-addon-legacy-20220516221408-2444 status is  but expect it to be exited
	I0516 22:14:56.113795    1812 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20220516221408-2444": docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	I0516 22:14:57.235270    1812 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}
	W0516 22:14:58.258551    1812 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:14:58.258677    1812 cli_runner.go:217] Completed: docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}: (1.0231131s)
	I0516 22:14:58.258677    1812 oci.go:653] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20220516221408-2444": docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	I0516 22:14:58.258677    1812 oci.go:655] temporary error: container ingress-addon-legacy-20220516221408-2444 status is  but expect it to be exited
	I0516 22:14:58.258677    1812 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20220516221408-2444": docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	I0516 22:14:59.791497    1812 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}
	W0516 22:15:00.836171    1812 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:15:00.836199    1812 cli_runner.go:217] Completed: docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}: (1.0445696s)
	I0516 22:15:00.836479    1812 oci.go:653] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20220516221408-2444": docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	I0516 22:15:00.836540    1812 oci.go:655] temporary error: container ingress-addon-legacy-20220516221408-2444 status is  but expect it to be exited
	I0516 22:15:00.836540    1812 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20220516221408-2444": docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	I0516 22:15:03.897464    1812 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}
	W0516 22:15:04.972922    1812 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:15:04.972958    1812 cli_runner.go:217] Completed: docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}: (1.0753225s)
	I0516 22:15:04.972958    1812 oci.go:653] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20220516221408-2444": docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	I0516 22:15:04.972958    1812 oci.go:655] temporary error: container ingress-addon-legacy-20220516221408-2444 status is  but expect it to be exited
	I0516 22:15:04.972958    1812 oci.go:88] couldn't shut down ingress-addon-legacy-20220516221408-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20220516221408-2444": docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	 
	I0516 22:15:04.982627    1812 cli_runner.go:164] Run: docker rm -f -v ingress-addon-legacy-20220516221408-2444
	I0516 22:15:06.022415    1812 cli_runner.go:217] Completed: docker rm -f -v ingress-addon-legacy-20220516221408-2444: (1.0397821s)
	I0516 22:15:06.031648    1812 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ingress-addon-legacy-20220516221408-2444
	W0516 22:15:07.112121    1812 cli_runner.go:211] docker container inspect -f {{.Id}} ingress-addon-legacy-20220516221408-2444 returned with exit code 1
	I0516 22:15:07.112121    1812 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} ingress-addon-legacy-20220516221408-2444: (1.0804668s)
	I0516 22:15:07.121913    1812 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220516221408-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:15:08.163735    1812 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220516221408-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:15:08.163735    1812 cli_runner.go:217] Completed: docker network inspect ingress-addon-legacy-20220516221408-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0417814s)
	I0516 22:15:08.173077    1812 network_create.go:272] running [docker network inspect ingress-addon-legacy-20220516221408-2444] to gather additional debugging logs...
	I0516 22:15:08.173077    1812 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220516221408-2444
	W0516 22:15:09.231033    1812 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220516221408-2444 returned with exit code 1
	I0516 22:15:09.231033    1812 cli_runner.go:217] Completed: docker network inspect ingress-addon-legacy-20220516221408-2444: (1.0579504s)
	I0516 22:15:09.231033    1812 network_create.go:275] error running [docker network inspect ingress-addon-legacy-20220516221408-2444]: docker network inspect ingress-addon-legacy-20220516221408-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-20220516221408-2444
	I0516 22:15:09.231033    1812 network_create.go:277] output of [docker network inspect ingress-addon-legacy-20220516221408-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-20220516221408-2444
	
	** /stderr **
	W0516 22:15:09.232008    1812 delete.go:139] delete failed (probably ok) <nil>
	I0516 22:15:09.232008    1812 fix.go:115] Sleeping 1 second for extra luck!
	I0516 22:15:10.239556    1812 start.go:131] createHost starting for "" (driver="docker")
	I0516 22:15:10.244883    1812 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0516 22:15:10.244883    1812 start.go:165] libmachine.API.Create for "ingress-addon-legacy-20220516221408-2444" (driver="docker")
	I0516 22:15:10.244883    1812 client.go:168] LocalClient.Create starting
	I0516 22:15:10.245834    1812 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 22:15:10.245946    1812 main.go:134] libmachine: Decoding PEM data...
	I0516 22:15:10.245946    1812 main.go:134] libmachine: Parsing certificate...
	I0516 22:15:10.245946    1812 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 22:15:10.246685    1812 main.go:134] libmachine: Decoding PEM data...
	I0516 22:15:10.246722    1812 main.go:134] libmachine: Parsing certificate...
	I0516 22:15:10.256537    1812 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220516221408-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:15:11.243577    1812 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220516221408-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:15:11.253469    1812 network_create.go:272] running [docker network inspect ingress-addon-legacy-20220516221408-2444] to gather additional debugging logs...
	I0516 22:15:11.253469    1812 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220516221408-2444
	W0516 22:15:12.263644    1812 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220516221408-2444 returned with exit code 1
	I0516 22:15:12.263673    1812 cli_runner.go:217] Completed: docker network inspect ingress-addon-legacy-20220516221408-2444: (1.0100552s)
	I0516 22:15:12.263703    1812 network_create.go:275] error running [docker network inspect ingress-addon-legacy-20220516221408-2444]: docker network inspect ingress-addon-legacy-20220516221408-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-20220516221408-2444
	I0516 22:15:12.263703    1812 network_create.go:277] output of [docker network inspect ingress-addon-legacy-20220516221408-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-20220516221408-2444
	
	** /stderr **
	I0516 22:15:12.272746    1812 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 22:15:13.327797    1812 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0548661s)
	I0516 22:15:13.344703    1812 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000590118] amended:true}} dirty:map[192.168.49.0:0xc000590118 192.168.58.0:0xc00040c270 192.168.67.0:0xc000516340 192.168.76.0:0xc000590238] misses:2}
	I0516 22:15:13.344703    1812 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:15:13.359512    1812 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000590118] amended:true}} dirty:map[192.168.49.0:0xc000590118 192.168.58.0:0xc00040c270 192.168.67.0:0xc000516340 192.168.76.0:0xc000590238] misses:3}
	I0516 22:15:13.359512    1812 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:15:13.373774    1812 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000590118 192.168.58.0:0xc00040c270 192.168.67.0:0xc000516340 192.168.76.0:0xc000590238] amended:false}} dirty:map[] misses:0}
	I0516 22:15:13.373774    1812 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:15:13.389952    1812 network.go:279] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000590118 192.168.58.0:0xc00040c270 192.168.67.0:0xc000516340 192.168.76.0:0xc000590238] amended:false}} dirty:map[] misses:0}
	I0516 22:15:13.389952    1812 network.go:238] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:15:13.405708    1812 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000590118 192.168.58.0:0xc00040c270 192.168.67.0:0xc000516340 192.168.76.0:0xc000590238] amended:true}} dirty:map[192.168.49.0:0xc000590118 192.168.58.0:0xc00040c270 192.168.67.0:0xc000516340 192.168.76.0:0xc000590238 192.168.85.0:0xc00014ec80] misses:0}
	I0516 22:15:13.405708    1812 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:15:13.405708    1812 network_create.go:115] attempt to create docker network ingress-addon-legacy-20220516221408-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0516 22:15:13.414752    1812 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220516221408-2444
	W0516 22:15:14.432413    1812 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220516221408-2444 returned with exit code 1
	I0516 22:15:14.432413    1812 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220516221408-2444: (1.0176554s)
	E0516 22:15:14.432413    1812 network_create.go:104] error while trying to create docker network ingress-addon-legacy-20220516221408-2444 192.168.85.0/24: create docker network ingress-addon-legacy-20220516221408-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220516221408-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 40def11ed27ce326f3a965a99e5d567383a5dc7a980fc6bf93c231e43edf4482 (br-40def11ed27c): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	W0516 22:15:14.432413    1812 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network ingress-addon-legacy-20220516221408-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220516221408-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 40def11ed27ce326f3a965a99e5d567383a5dc7a980fc6bf93c231e43edf4482 (br-40def11ed27c): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network ingress-addon-legacy-20220516221408-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220516221408-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 40def11ed27ce326f3a965a99e5d567383a5dc7a980fc6bf93c231e43edf4482 (br-40def11ed27c): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	I0516 22:15:14.448977    1812 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 22:15:15.466045    1812 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0167341s)
	I0516 22:15:15.474684    1812 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-20220516221408-2444 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220516221408-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 22:15:16.497362    1812 cli_runner.go:211] docker volume create ingress-addon-legacy-20220516221408-2444 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220516221408-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 22:15:16.497513    1812 cli_runner.go:217] Completed: docker volume create ingress-addon-legacy-20220516221408-2444 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220516221408-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0226733s)
	I0516 22:15:16.497547    1812 client.go:171] LocalClient.Create took 6.2526287s
	I0516 22:15:18.517964    1812 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:15:18.524722    1812 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444
	W0516 22:15:19.559199    1812 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444 returned with exit code 1
	I0516 22:15:19.559199    1812 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444: (1.0344714s)
	I0516 22:15:19.559199    1812 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220516221408-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	I0516 22:15:19.898233    1812 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444
	W0516 22:15:20.895393    1812 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444 returned with exit code 1
	W0516 22:15:20.895393    1812 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220516221408-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	
	W0516 22:15:20.895393    1812 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220516221408-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	I0516 22:15:20.907200    1812 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:15:20.913458    1812 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444
	W0516 22:15:21.933786    1812 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444 returned with exit code 1
	I0516 22:15:21.933786    1812 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444: (1.0203224s)
	I0516 22:15:21.933786    1812 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220516221408-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	I0516 22:15:22.166323    1812 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444
	W0516 22:15:23.191599    1812 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444 returned with exit code 1
	I0516 22:15:23.191599    1812 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444: (1.0252698s)
	W0516 22:15:23.191599    1812 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220516221408-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	
	W0516 22:15:23.191599    1812 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220516221408-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	I0516 22:15:23.191599    1812 start.go:134] duration metric: createHost completed in 12.9517417s
	I0516 22:15:23.203408    1812 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:15:23.211385    1812 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444
	W0516 22:15:24.222898    1812 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444 returned with exit code 1
	I0516 22:15:24.222898    1812 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444: (1.0115075s)
	I0516 22:15:24.222898    1812 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220516221408-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	I0516 22:15:24.474377    1812 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444
	W0516 22:15:25.483097    1812 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444 returned with exit code 1
	I0516 22:15:25.483139    1812 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444: (1.0085126s)
	W0516 22:15:25.483436    1812 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220516221408-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	
	W0516 22:15:25.483436    1812 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220516221408-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	I0516 22:15:25.501203    1812 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:15:25.508222    1812 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444
	W0516 22:15:26.534618    1812 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444 returned with exit code 1
	I0516 22:15:26.534774    1812 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444: (1.0262539s)
	I0516 22:15:26.534916    1812 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220516221408-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	I0516 22:15:26.753910    1812 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444
	W0516 22:15:27.762973    1812 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444 returned with exit code 1
	I0516 22:15:27.762973    1812 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444: (1.0080473s)
	W0516 22:15:27.762973    1812 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220516221408-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	
	W0516 22:15:27.762973    1812 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220516221408-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220516221408-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	I0516 22:15:27.762973    1812 fix.go:57] fixHost completed within 45.0375319s
	I0516 22:15:27.762973    1812 start.go:81] releasing machines lock for "ingress-addon-legacy-20220516221408-2444", held for 45.0378207s
	W0516 22:15:27.763728    1812 out.go:239] * Failed to start docker container. Running "minikube delete -p ingress-addon-legacy-20220516221408-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for ingress-addon-legacy-20220516221408-2444 container: docker volume create ingress-addon-legacy-20220516221408-2444 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220516221408-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create ingress-addon-legacy-20220516221408-2444: error while creating volume root path '/var/lib/docker/volumes/ingress-addon-legacy-20220516221408-2444': mkdir /var/lib/docker/volumes/ingress-addon-legacy-20220516221408-2444: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p ingress-addon-legacy-20220516221408-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for ingress-addon-legacy-20220516221408-2444 container: docker volume create ingress-addon-legacy-20220516221408-2444 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220516221408-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create ingress-addon-legacy-20220516221408-2444: error while creating volume root path '/var/lib/docker/volumes/ingress-addon-legacy-20220516221408-2444': mkdir /var/lib/docker/volumes/ingress-addon-legacy-20220516221408-2444: read-only file system
	
	I0516 22:15:27.772142    1812 out.go:177] 
	W0516 22:15:27.774293    1812 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for ingress-addon-legacy-20220516221408-2444 container: docker volume create ingress-addon-legacy-20220516221408-2444 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220516221408-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create ingress-addon-legacy-20220516221408-2444: error while creating volume root path '/var/lib/docker/volumes/ingress-addon-legacy-20220516221408-2444': mkdir /var/lib/docker/volumes/ingress-addon-legacy-20220516221408-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for ingress-addon-legacy-20220516221408-2444 container: docker volume create ingress-addon-legacy-20220516221408-2444 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220516221408-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create ingress-addon-legacy-20220516221408-2444: error while creating volume root path '/var/lib/docker/volumes/ingress-addon-legacy-20220516221408-2444': mkdir /var/lib/docker/volumes/ingress-addon-legacy-20220516221408-2444: read-only file system
	
	W0516 22:15:27.774293    1812 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0516 22:15:27.774293    1812 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0516 22:15:27.777570    1812 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p ingress-addon-legacy-20220516221408-2444 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker" : exit status 60
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (79.40s)
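The root cause of the network-create failure above is the daemon error `networks have overlapping IPv4`: the 192.168.85.0/24 subnet minikube picked collides with an existing bridge network's range. A minimal Go sketch of that overlap condition, using only the standard library (`cidrsOverlap` is a hypothetical helper for illustration, not minikube's actual code):

```go
package main

import (
	"fmt"
	"net"
)

// cidrsOverlap reports whether two IPv4 CIDR blocks share any addresses.
// Two networks overlap exactly when either one contains the other's base
// address -- the condition the Docker daemon rejects with
// "networks have overlapping IPv4".
func cidrsOverlap(a, b string) bool {
	_, na, err := net.ParseCIDR(a)
	if err != nil {
		return false
	}
	_, nb, err := net.ParseCIDR(b)
	if err != nil {
		return false
	}
	return na.Contains(nb.IP) || nb.Contains(na.IP)
}

func main() {
	// The subnet minikube tried to reserve vs. an identical existing range.
	fmt.Println(cidrsOverlap("192.168.85.0/24", "192.168.85.0/24")) // true
	// A disjoint pair, like the other reservations in the log.
	fmt.Println(cidrsOverlap("192.168.85.0/24", "192.168.58.0/24")) // false
}
```

On a live host, `docker network inspect <name> --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'` over each network from `docker network ls` would show which existing bridge occupies the conflicting range.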

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (7.04s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220516221408-2444 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220516221408-2444 addons enable ingress --alsologtostderr -v=5: exit status 10 (3.0800077s)

-- stdout --
	* Verifying ingress addon...
	
	

-- /stdout --
** stderr ** 
	I0516 22:15:28.161162    6360 out.go:296] Setting OutFile to fd 668 ...
	I0516 22:15:28.233441    6360 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:15:28.233441    6360 out.go:309] Setting ErrFile to fd 924...
	I0516 22:15:28.233441    6360 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:15:28.246055    6360 config.go:178] Loaded profile config "ingress-addon-legacy-20220516221408-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0516 22:15:28.246055    6360 addons.go:65] Setting ingress=true in profile "ingress-addon-legacy-20220516221408-2444"
	I0516 22:15:28.246055    6360 addons.go:153] Setting addon ingress=true in "ingress-addon-legacy-20220516221408-2444"
	I0516 22:15:28.247788    6360 host.go:66] Checking if "ingress-addon-legacy-20220516221408-2444" exists ...
	I0516 22:15:28.262768    6360 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}
	W0516 22:15:30.677234    6360 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:15:30.677234    6360 cli_runner.go:217] Completed: docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}: (2.4141921s)
	W0516 22:15:30.677234    6360 host.go:54] host status for "ingress-addon-legacy-20220516221408-2444" returned error: state: unknown state "ingress-addon-legacy-20220516221408-2444": docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444
	W0516 22:15:30.677234    6360 addons.go:202] "ingress-addon-legacy-20220516221408-2444" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I0516 22:15:30.677234    6360 addons.go:386] Verifying addon ingress=true in "ingress-addon-legacy-20220516221408-2444"
	I0516 22:15:30.681296    6360 out.go:177] * Verifying ingress addon...
	W0516 22:15:30.683617    6360 loader.go:221] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0516 22:15:30.686807    6360 out.go:177] 
	W0516 22:15:30.689061    6360 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-20220516221408-2444" does not exist: client config: context "ingress-addon-legacy-20220516221408-2444" does not exist]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-20220516221408-2444" does not exist: client config: context "ingress-addon-legacy-20220516221408-2444" does not exist]
	W0516 22:15:30.689061    6360 out.go:239] * 
	* 
	W0516 22:15:30.936653    6360 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_addons_765a40db962dd8139438d8c956b5e6e825316d2d_5.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_addons_765a40db962dd8139438d8c956b5e6e825316d2d_5.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0516 22:15:30.941710    6360 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220516221408-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect ingress-addon-legacy-20220516221408-2444: exit status 1 (1.1277202s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: ingress-addon-legacy-20220516221408-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ingress-addon-legacy-20220516221408-2444 -n ingress-addon-legacy-20220516221408-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p ingress-addon-legacy-20220516221408-2444 -n ingress-addon-legacy-20220516221408-2444: exit status 7 (2.8264794s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:15:34.909074    6376 status.go:247] status error: host: state: unknown state "ingress-addon-legacy-20220516221408-2444": docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220516221408-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (7.04s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (3.83s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:156: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220516221408-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect ingress-addon-legacy-20220516221408-2444: exit status 1 (1.0690297s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: ingress-addon-legacy-20220516221408-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ingress-addon-legacy-20220516221408-2444 -n ingress-addon-legacy-20220516221408-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p ingress-addon-legacy-20220516221408-2444 -n ingress-addon-legacy-20220516221408-2444: exit status 7 (2.7493859s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:15:41.535716    7008 status.go:247] status error: host: state: unknown state "ingress-addon-legacy-20220516221408-2444": docker container inspect ingress-addon-legacy-20220516221408-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220516221408-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220516221408-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (3.83s)

TestJSONOutput/start/Command (77.82s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-20220516221549-2444 --output=json --user=testUser --memory=2200 --wait=true --driver=docker
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-20220516221549-2444 --output=json --user=testUser --memory=2200 --wait=true --driver=docker: exit status 60 (1m17.8139005s)

-- stdout --
	{"specversion":"1.0","id":"de2c5afc-4790-4150-9028-4b4f6167d02d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-20220516221549-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5cd61ead-6081-4c75-99fc-837032f1c899","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"ca2b4c6d-76bb-4aa5-9b74-8c5dea18fd78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"c14f241b-7e1d-4484-ac5a-693ab9eca7cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=12739"}}
	{"specversion":"1.0","id":"62d200cd-af15-4074-967c-775047cba256","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5903f189-bf1e-4b32-b19d-115d6972f43c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c6281c29-6ed1-4f2d-9c7b-72e4c0d595a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with the root privilege"}}
	{"specversion":"1.0","id":"0b794b02-7b2b-4517-a20a-a388e88696c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node json-output-20220516221549-2444 in cluster json-output-20220516221549-2444","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"44794c2b-19c8-419f-b65a-d69c5e84193d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"ba269314-5df7-482c-a49d-358d60e34aef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2200MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"2730106a-42b9-4780-96a9-61b4c25c4acd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network json-output-20220516221549-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true json-output-20220516221549-2444: exit status 1\nstdout:\n\nstderr:\nError response from daemon: cannot create network 79e0b8080047d72b52611c73b2129f7d492d754d96f12aaff55e69eaa9da07e9 (br-79e0b8080047): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4"}}
	{"specversion":"1.0","id":"12d22782-e118-49a8-be54-929292161cd8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for json-output-20220516221549-2444 container: docker volume create json-output-20220516221549-2444 --label name.minikube.sigs.k8s.io=json-output-20220516221549-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1\nstdout:\n\nstderr:\nError response from daemon: create json-output-20220516221549-2444: error while creating volume root path '/var/lib/docker/volumes/json-output-20220516221549-2444': mkdir /var/lib/docker/volumes/json-output-20220516221549-2444: read-only file system"}}
	{"specversion":"1.0","id":"7ed5ec81-fb74-413d-aaf9-bcfd5760d3da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"docker \"json-output-20220516221549-2444\" container is missing, will recreate.","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"0b8e835d-8707-4bef-9bf7-e2395f07f668","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2200MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"8647f463-ca83-41c3-8754-510fb726303c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network json-output-20220516221549-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true json-output-20220516221549-2444: exit status 1\nstdout:\n\nstderr:\nError response from daemon: cannot create network bd742cdd16b6610ebff1499e8fd8281d7fcca458c8e3d5b8e4013ab1b4619932 (br-bd742cdd16b6): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4"}}
	{"specversion":"1.0","id":"5cd22c06-326e-4331-bb8e-e2613326a7d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start docker container. Running \"minikube delete -p json-output-20220516221549-2444\" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for json-output-20220516221549-2444 container: docker volume create json-output-20220516221549-2444 --label name.minikube.sigs.k8s.io=json-output-20220516221549-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1\nstdout:\n\nstderr:\nError response from daemon: create json-output-20220516221549-2444: error while creating volume root path '/var/lib/docker/volumes/json-output-20220516221549-2444': mkdir /var/lib/docker/volumes/json-output-20220516221549-2444: read-only file system"}}
	{"specversion":"1.0","id":"bd9ee111-db31-4350-a1e8-66b808f32f60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Restart Docker","exitcode":"60","issues":"https://github.com/kubernetes/minikube/issues/6825","message":"Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for json-output-20220516221549-2444 container: docker volume create json-output-20220516221549-2444 --label name.minikube.sigs.k8s.io=json-output-20220516221549-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1\nstdout:\n\nstderr:\nError response from daemon: create json-output-20220516221549-2444: error while creating volume root path '/var/lib/docker/volumes/json-output-20220516221549-2444': mkdir /var/lib/docker/volumes/json-output-20220516221549-2444: read-only file system","name":"PR_DOCKER_READONLY_VOL","url":""}}

-- /stdout --
** stderr ** 
	E0516 22:16:07.361124    7968 network_create.go:104] error while trying to create docker network json-output-20220516221549-2444 192.168.76.0/24: create docker network json-output-20220516221549-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true json-output-20220516221549-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 79e0b8080047d72b52611c73b2129f7d492d754d96f12aaff55e69eaa9da07e9 (br-79e0b8080047): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	E0516 22:16:53.817779    7968 network_create.go:104] error while trying to create docker network json-output-20220516221549-2444 192.168.85.0/24: create docker network json-output-20220516221549-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true json-output-20220516221549-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network bd742cdd16b6610ebff1499e8fd8281d7fcca458c8e3d5b8e4013ab1b4619932 (br-bd742cdd16b6): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4

** /stderr **
json_output_test.go:65: failed to clean up: args "out/minikube-windows-amd64.exe start -p json-output-20220516221549-2444 --output=json --user=testUser --memory=2200 --wait=true --driver=docker": exit status 60
--- FAIL: TestJSONOutput/start/Command (77.82s)
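The "networks have overlapping IPv4" errors above occur because Docker refuses to create a bridge network whose subnet overlaps one that already exists; both of minikube's attempts (192.168.76.0/24, then 192.168.85.0/24) hit stale bridges. The refusal condition can be sketched with Python's standard `ipaddress` module; the `in_use` subnets here are hypothetical stand-ins for the conflicting bridges, which the log identifies only by network ID, and this is an illustration rather than Docker's or minikube's actual code:

```python
import ipaddress

# Hypothetical subnets already claimed by stale bridge networks; the log
# reports only the conflicting network IDs (br-79e0..., br-ea4b...), not
# their ranges, so these values are assumptions for illustration.
in_use = [ipaddress.ip_network("192.168.76.0/24"),
          ipaddress.ip_network("192.168.85.0/24")]

def conflicts(candidate: str) -> bool:
    """Return True if candidate overlaps any in-use subnet (docker's refusal condition)."""
    net = ipaddress.ip_network(candidate)
    return any(net.overlaps(existing) for existing in in_use)

print(conflicts("192.168.76.0/24"))  # True  - first attempt in the log
print(conflicts("192.168.85.0/24"))  # True  - the retry in the log
print(conflicts("192.168.94.0/24"))  # False - a free range would succeed
```

In practice, removing the stale networks (e.g. via `docker network prune` after stopping their containers) frees the ranges, which is consistent with the report's "Restart Docker" advice.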

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
json_output_test.go:80: audit.json does not contain the user testUser
--- FAIL: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
json_output_test.go:114: step 8 has already been assigned to another step:
Creating docker container (CPUs=2, Memory=2200MB) ...
Cannot use for:
docker "json-output-20220516221549-2444" container is missing, will recreate.
[Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: de2c5afc-4790-4150-9028-4b4f6167d02d
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-20220516221549-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 5cd61ead-6081-4c75-99fc-837032f1c899
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: ca2b4c6d-76bb-4aa5-9b74-8c5dea18fd78
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: c14f241b-7e1d-4484-ac5a-693ab9eca7cf
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=12739"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 62d200cd-af15-4074-967c-775047cba256
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 5903f189-bf1e-4b32-b19d-115d6972f43c
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the docker driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: c6281c29-6ed1-4f2d-9c7b-72e4c0d595a0
datacontenttype: application/json
Data,
{
"message": "Using Docker Desktop driver with the root privilege"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 0b794b02-7b2b-4517-a20a-a388e88696c1
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting control plane node json-output-20220516221549-2444 in cluster json-output-20220516221549-2444",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 44794c2b-19c8-419f-b65a-d69c5e84193d
datacontenttype: application/json
Data,
{
"currentstep": "5",
"message": "Pulling base image ...",
"name": "Pulling Base Image",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: ba269314-5df7-482c-a49d-358d60e34aef
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=2200MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.warning
source: https://minikube.sigs.k8s.io/
id: 2730106a-42b9-4780-96a9-61b4c25c4acd
datacontenttype: application/json
Data,
{
"message": "Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network json-output-20220516221549-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true json-output-20220516221549-2444: exit status 1\nstdout:\n\nstderr:\nError response from daemon: cannot create network 79e0b8080047d72b52611c73b2129f7d492d754d96f12aaff55e69eaa9da07e9 (br-79e0b8080047): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 12d22782-e118-49a8-be54-929292161cd8
datacontenttype: application/json
Data,
{
"message": "StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for json-output-20220516221549-2444 container: docker volume create json-output-20220516221549-2444 --label name.minikube.sigs.k8s.io=json-output-20220516221549-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1\nstdout:\n\nstderr:\nError response from daemon: create json-output-20220516221549-2444: error while creating volume root path '/var/lib/docker/volumes/json-output-20220516221549-2444': mkdir /var/lib/docker/volumes/json-output-20220516221549-2444: read-only file system"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 7ed5ec81-fb74-413d-aaf9-bcfd5760d3da
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "docker \"json-output-20220516221549-2444\" container is missing, will recreate.",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 0b8e835d-8707-4bef-9bf7-e2395f07f668
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=2200MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.warning
source: https://minikube.sigs.k8s.io/
id: 8647f463-ca83-41c3-8754-510fb726303c
datacontenttype: application/json
Data,
{
"message": "Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network json-output-20220516221549-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true json-output-20220516221549-2444: exit status 1\nstdout:\n\nstderr:\nError response from daemon: cannot create network bd742cdd16b6610ebff1499e8fd8281d7fcca458c8e3d5b8e4013ab1b4619932 (br-bd742cdd16b6): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 5cd22c06-326e-4331-bb8e-e2613326a7d7
datacontenttype: application/json
Data,
{
"message": "Failed to start docker container. Running \"minikube delete -p json-output-20220516221549-2444\" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for json-output-20220516221549-2444 container: docker volume create json-output-20220516221549-2444 --label name.minikube.sigs.k8s.io=json-output-20220516221549-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1\nstdout:\n\nstderr:\nError response from daemon: create json-output-20220516221549-2444: error while creating volume root path '/var/lib/docker/volumes/json-output-20220516221549-2444': mkdir /var/lib/docker/volumes/json-output-20220516221549-2444: read-only file system"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: bd9ee111-db31-4350-a1e8-66b808f32f60
datacontenttype: application/json
Data,
{
"advice": "Restart Docker",
"exitcode": "60",
"issues": "https://github.com/kubernetes/minikube/issues/6825",
"message": "Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for json-output-20220516221549-2444 container: docker volume create json-output-20220516221549-2444 --label name.minikube.sigs.k8s.io=json-output-20220516221549-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1\nstdout:\n\nstderr:\nError response from daemon: create json-output-20220516221549-2444: error while creating volume root path '/var/lib/docker/volumes/json-output-20220516221549-2444': mkdir /var/lib/docker/volumes/json-output-20220516221549-2444: read-only file system",
"name": "PR_DOCKER_READONLY_VOL",
"url": ""
}
]
--- FAIL: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
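The DistinctCurrentSteps failure above enforces that each `currentstep` value in the JSON event stream maps to exactly one step message; here step "8" was emitted both for "Creating docker container ..." and for the "container is missing, will recreate" message after the retry. The invariant can be sketched as follows; the event list is a trimmed stand-in for the parsed CloudEvents in the log, and this is an illustration of the check, not the test's actual code:

```python
# Two "currentstep": "8" events with different messages, as in the failure above.
events = [
    {"currentstep": "8", "name": "Creating Container",
     "message": "Creating docker container (CPUs=2, Memory=2200MB) ..."},
    {"currentstep": "8", "name": "Creating Container",
     "message": "docker container is missing, will recreate."},
]

seen = {}        # currentstep -> first message observed for it
violations = []  # (step, first message, conflicting message)
for ev in events:
    step = ev["currentstep"]
    if step in seen and seen[step] != ev["message"]:
        violations.append((step, seen[step], ev["message"]))
    seen.setdefault(step, ev["message"])

print(len(violations))  # 1: step 8 reused for a different message
```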

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.01s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
json_output_test.go:133: current step is not in increasing order: [Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: de2c5afc-4790-4150-9028-4b4f6167d02d
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-20220516221549-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 5cd61ead-6081-4c75-99fc-837032f1c899
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: ca2b4c6d-76bb-4aa5-9b74-8c5dea18fd78
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: c14f241b-7e1d-4484-ac5a-693ab9eca7cf
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=12739"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 62d200cd-af15-4074-967c-775047cba256
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 5903f189-bf1e-4b32-b19d-115d6972f43c
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the docker driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: c6281c29-6ed1-4f2d-9c7b-72e4c0d595a0
datacontenttype: application/json
Data,
{
"message": "Using Docker Desktop driver with the root privilege"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 0b794b02-7b2b-4517-a20a-a388e88696c1
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting control plane node json-output-20220516221549-2444 in cluster json-output-20220516221549-2444",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 44794c2b-19c8-419f-b65a-d69c5e84193d
datacontenttype: application/json
Data,
{
"currentstep": "5",
"message": "Pulling base image ...",
"name": "Pulling Base Image",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: ba269314-5df7-482c-a49d-358d60e34aef
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=2200MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.warning
source: https://minikube.sigs.k8s.io/
id: 2730106a-42b9-4780-96a9-61b4c25c4acd
datacontenttype: application/json
Data,
{
"message": "Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network json-output-20220516221549-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true json-output-20220516221549-2444: exit status 1\nstdout:\n\nstderr:\nError response from daemon: cannot create network 79e0b8080047d72b52611c73b2129f7d492d754d96f12aaff55e69eaa9da07e9 (br-79e0b8080047): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 12d22782-e118-49a8-be54-929292161cd8
datacontenttype: application/json
Data,
{
"message": "StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for json-output-20220516221549-2444 container: docker volume create json-output-20220516221549-2444 --label name.minikube.sigs.k8s.io=json-output-20220516221549-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1\nstdout:\n\nstderr:\nError response from daemon: create json-output-20220516221549-2444: error while creating volume root path '/var/lib/docker/volumes/json-output-20220516221549-2444': mkdir /var/lib/docker/volumes/json-output-20220516221549-2444: read-only file system"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 7ed5ec81-fb74-413d-aaf9-bcfd5760d3da
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "docker \"json-output-20220516221549-2444\" container is missing, will recreate.",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 0b8e835d-8707-4bef-9bf7-e2395f07f668
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=2200MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.warning
source: https://minikube.sigs.k8s.io/
id: 8647f463-ca83-41c3-8754-510fb726303c
datacontenttype: application/json
Data,
{
"message": "Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network json-output-20220516221549-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true json-output-20220516221549-2444: exit status 1\nstdout:\n\nstderr:\nError response from daemon: cannot create network bd742cdd16b6610ebff1499e8fd8281d7fcca458c8e3d5b8e4013ab1b4619932 (br-bd742cdd16b6): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 5cd22c06-326e-4331-bb8e-e2613326a7d7
datacontenttype: application/json
Data,
{
"message": "Failed to start docker container. Running \"minikube delete -p json-output-20220516221549-2444\" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for json-output-20220516221549-2444 container: docker volume create json-output-20220516221549-2444 --label name.minikube.sigs.k8s.io=json-output-20220516221549-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1\nstdout:\n\nstderr:\nError response from daemon: create json-output-20220516221549-2444: error while creating volume root path '/var/lib/docker/volumes/json-output-20220516221549-2444': mkdir /var/lib/docker/volumes/json-output-20220516221549-2444: read-only file system"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: bd9ee111-db31-4350-a1e8-66b808f32f60
datacontenttype: application/json
Data,
{
"advice": "Restart Docker",
"exitcode": "60",
"issues": "https://github.com/kubernetes/minikube/issues/6825",
"message": "Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for json-output-20220516221549-2444 container: docker volume create json-output-20220516221549-2444 --label name.minikube.sigs.k8s.io=json-output-20220516221549-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1\nstdout:\n\nstderr:\nError response from daemon: create json-output-20220516221549-2444: error while creating volume root path '/var/lib/docker/volumes/json-output-20220516221549-2444': mkdir /var/lib/docker/volumes/json-output-20220516221549-2444: read-only file system",
"name": "PR_DOCKER_READONLY_VOL",
"url": ""
}
]
--- FAIL: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.01s)
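For context, the IncreasingCurrentSteps subtest asserts that the `currentstep` values in the CloudEvents emitted by `minikube start --output=json` never decrease. Below is a minimal sketch of that property check — an approximation, not minikube's actual json_output_test.go code; the field names are taken from the events dumped above:

```python
import json

# Each --output=json line is a CloudEvent; only "step" events carry a
# "currentstep" field (as a string). The check fails if a later step
# event reports a lower step number than an earlier one.
def steps_increasing(event_lines):
    last = -1
    for line in event_lines:
        ev = json.loads(line)
        if ev.get("type") != "io.k8s.sigs.minikube.step":
            continue  # warning/error events have no currentstep
        cur = int(ev["data"]["currentstep"])
        if cur < last:
            return False
        last = cur
    return True

# The step sequence from the dump above (3, 5, 8, 8) is non-decreasing.
events = [
    '{"type":"io.k8s.sigs.minikube.step","data":{"currentstep":"3"}}',
    '{"type":"io.k8s.sigs.minikube.step","data":{"currentstep":"5"}}',
    '{"type":"io.k8s.sigs.minikube.step","data":{"currentstep":"8"}}',
]
```

The subtest still fails here because the aborted start never produced a complete, well-ordered event stream.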

TestJSONOutput/pause/Command (3.09s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-20220516221549-2444 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p json-output-20220516221549-2444 --output=json --user=testUser: exit status 80 (3.093914s)

-- stdout --
	{"specversion":"1.0","id":"8ba66523-6391-43de-a8bf-628c2615421d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"state: unknown state \"json-output-20220516221549-2444\": docker container inspect json-output-20220516221549-2444 --format=: exit status 1\nstdout:\n\n\nstderr:\nError: No such container: json-output-20220516221549-2444","name":"GUEST_STATUS","url":""}}
	{"specversion":"1.0","id":"31d9c7b7-5d31-4c4f-b346-53c7f21dce2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                                                      │\n│    If the above advice does not help, please let us know:                                                            │\n│    https://github.com/kubernetes/minikube/issues/new/choose                                                          │\n│                                                                                                                      │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │\n│    Please also attach the following file to the GitHub issue:                                                        │\n│    - C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_10.log    │\n│                                                                                                                      │\n╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-windows-amd64.exe pause -p json-output-20220516221549-2444 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (3.09s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
json_output_test.go:80: audit.json does not contain the user testUser
--- FAIL: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/unpause/Command (3.03s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-20220516221549-2444 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-windows-amd64.exe unpause -p json-output-20220516221549-2444 --output=json --user=testUser: exit status 80 (3.0314463s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "json-output-20220516221549-2444": docker container inspect json-output-20220516221549-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: json-output-20220516221549-2444
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_unpause_00b12d9cedab4ae1bb930a621bdee2ada68dbd98_8.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
json_output_test.go:65: failed to clean up: args "out/minikube-windows-amd64.exe unpause -p json-output-20220516221549-2444 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (3.03s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
json_output_test.go:80: audit.json does not contain the user testUser
--- FAIL: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/stop/Command (22.04s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-20220516221549-2444 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p json-output-20220516221549-2444 --output=json --user=testUser: exit status 82 (22.0431683s)

-- stdout --
	{"specversion":"1.0","id":"a12ec670-1f99-4739-9ccc-bc9dac194f5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20220516221549-2444\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"45b8424a-4305-4687-88cd-f592f8c0ddd9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20220516221549-2444\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"7e5ebb0b-4748-4e05-a522-cd71700c71a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20220516221549-2444\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"32c8e404-8001-4e88-a700-2a796cdc3033","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20220516221549-2444\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"312c6440-96f2-478d-86b5-40598e1677d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20220516221549-2444\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"88099230-c874-4055-a35f-3a9b440459c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20220516221549-2444\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"f0316a0f-3593-443a-bf31-0f60956af498","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"82","issues":"","message":"docker container inspect json-output-20220516221549-2444 --format=: exit status 1\nstdout:\n\n\nstderr:\nError: No such container: json-output-20220516221549-2444","name":"GUEST_STOP_TIMEOUT","url":""}}
	{"specversion":"1.0","id":"b071994d-fa31-470b-a03e-8e009bcbc9b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                                                     │\n│    If the above advice does not help, please let us know:                                                           │\n│    https://github.com/kubernetes/minikube/issues/new/choose                                                         │\n│                                                                                                                     │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │\n│    Please also attach the following file to the GitHub issue:                                                       │\n│    - C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_39.log    │\n│                                                                                                                     │\n╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	E0516 22:17:18.737976    8784 daemonize_windows.go:38] error terminating scheduled stop for profile json-output-20220516221549-2444: stopping schedule-stop service for profile json-output-20220516221549-2444: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "json-output-20220516221549-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" json-output-20220516221549-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: json-output-20220516221549-2444

** /stderr **
json_output_test.go:65: failed to clean up: args "out/minikube-windows-amd64.exe stop -p json-output-20220516221549-2444 --output=json --user=testUser": exit status 82
--- FAIL: TestJSONOutput/stop/Command (22.04s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
json_output_test.go:80: audit.json does not contain the user testUser
--- FAIL: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
json_output_test.go:114: step 0 has already been assigned to another step:
Stopping node "json-output-20220516221549-2444"  ...
Cannot use for:
Stopping node "json-output-20220516221549-2444"  ...
[Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: a12ec670-1f99-4739-9ccc-bc9dac194f5e
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220516221549-2444\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 45b8424a-4305-4687-88cd-f592f8c0ddd9
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220516221549-2444\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 7e5ebb0b-4748-4e05-a522-cd71700c71a6
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220516221549-2444\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 32c8e404-8001-4e88-a700-2a796cdc3033
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220516221549-2444\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 312c6440-96f2-478d-86b5-40598e1677d2
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220516221549-2444\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 88099230-c874-4055-a35f-3a9b440459c9
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220516221549-2444\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: f0316a0f-3593-443a-bf31-0f60956af498
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "82",
"issues": "",
"message": "docker container inspect json-output-20220516221549-2444 --format=: exit status 1\nstdout:\n\n\nstderr:\nError: No such container: json-output-20220516221549-2444",
"name": "GUEST_STOP_TIMEOUT",
"url": ""
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: b071994d-fa31-470b-a03e-8e009bcbc9b0
datacontenttype: application/json
Data,
{
"message": "╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                                                     │\n│    If the above advice does not help, please let us know:                                                           │\n│    https://github.com/kubernetes/minikube/issues/new/choose                                                         │\n│                                                                                                                     │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │\n│    Please also attach the following file to the GitHub issue:                                                       │\n│    - C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_39.log    │\n│                                                                                                                     │\n╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯"
}
]
--- FAIL: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
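The DistinctCurrentSteps subtest fails above because `currentstep` 0 is emitted repeatedly: each step number may be assigned only once. A rough sketch of that uniqueness check — an approximation of the json_output_test.go logic, not the real test code:

```python
import json

# A currentstep number must not be assigned twice, even when the repeated
# event carries the same message (as with the six "Stopping node" events
# in the failure above).
def steps_distinct(event_lines):
    assigned = set()
    for line in event_lines:
        ev = json.loads(line)
        if ev.get("type") != "io.k8s.sigs.minikube.step":
            continue  # only step events carry currentstep
        step = ev["data"]["currentstep"]
        if step in assigned:
            return False  # step number reused
        assigned.add(step)
    return True
```

With the dump above, step "0" appears six times, so this predicate is false and the subtest fails.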

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.01s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
json_output_test.go:133: current step is not in increasing order: [Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: a12ec670-1f99-4739-9ccc-bc9dac194f5e
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220516221549-2444\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 45b8424a-4305-4687-88cd-f592f8c0ddd9
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220516221549-2444\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 7e5ebb0b-4748-4e05-a522-cd71700c71a6
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220516221549-2444\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 32c8e404-8001-4e88-a700-2a796cdc3033
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220516221549-2444\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 312c6440-96f2-478d-86b5-40598e1677d2
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220516221549-2444\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 88099230-c874-4055-a35f-3a9b440459c9
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220516221549-2444\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: f0316a0f-3593-443a-bf31-0f60956af498
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "82",
"issues": "",
"message": "docker container inspect json-output-20220516221549-2444 --format=: exit status 1\nstdout:\n\n\nstderr:\nError: No such container: json-output-20220516221549-2444",
"name": "GUEST_STOP_TIMEOUT",
"url": ""
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: b071994d-fa31-470b-a03e-8e009bcbc9b0
datacontenttype: application/json
Data,
{
"message": "╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                                                     │\n│    If the above advice does not help, please let us know:                                                           │\n│    https://github.com/kubernetes/minikube/issues/new/choose                                                         │\n│                                                                                                                     │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │\n│    Please also attach the following file to the GitHub issue:                                                       │\n│    - C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_39.log    │\n│                                                                                                                     │\n╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯"
}
]
--- FAIL: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.01s)

TestKicCustomNetwork/create_custom_network (244.08s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-20220516221751-2444 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-20220516221751-2444 --network=: (3m23.1859218s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:122: (dbg) Done: docker network ls --format {{.Name}}: (1.0656578s)
kic_custom_network_test.go:127: docker-network-20220516221751-2444 network is not listed by [[docker network ls --format {{.Name}}]]: 
-- stdout --
	bridge
	host
	none

-- /stdout --
helpers_test.go:175: Cleaning up "docker-network-20220516221751-2444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-20220516221751-2444
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-20220516221751-2444: (39.8148203s)
--- FAIL: TestKicCustomNetwork/create_custom_network (244.08s)
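The assertion at kic_custom_network_test.go:127 simply looks for the profile name in the newline-separated output of `docker network ls --format {{.Name}}`. A sketch of that membership check using the stdout captured above — plain string handling, no docker calls:

```python
# The created network's name must appear in `docker network ls
# --format {{.Name}}` output, one name per line.
def network_listed(ls_stdout, name):
    return name in ls_stdout.splitlines()

# stdout from the failure above: only the default networks exist, so the
# check for the profile network fails.
ls_stdout = "bridge\nhost\nnone\n"
```

Here `network_listed(ls_stdout, "docker-network-20220516221751-2444")` is false, which is exactly why the test reports the network "is not listed".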

TestKicExistingNetwork (7.33s)

=== RUN   TestKicExistingNetwork
E0516 22:25:49.528849    2444 network_create.go:104] error while trying to create docker network existing-network 192.168.76.0/24: create docker network existing-network 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true existing-network: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 2e4ece31b2fa9bb8fa842569928917cc88b1e4727ef7c8a0e099be95762fd1aa (br-2e4ece31b2fa): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
kic_custom_network_test.go:78: error creating network: un-retryable: create docker network existing-network 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true existing-network: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 2e4ece31b2fa9bb8fa842569928917cc88b1e4727ef7c8a0e099be95762fd1aa (br-2e4ece31b2fa): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
--- FAIL: TestKicExistingNetwork (7.33s)
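The daemon error above ("networks have overlapping IPv4") means the requested 192.168.76.0/24 subnet intersects an address range already claimed by another bridge. A sketch of the overlap condition using Python's `ipaddress`; the second subnet is a hypothetical stand-in, since the report does not show what br-301630a99a7e actually spans:

```python
import ipaddress

# Bridge subnets may not overlap. The requested range comes from the log;
# the existing bridge's range here is hypothetical for illustration.
requested = ipaddress.ip_network("192.168.76.0/24")
existing = ipaddress.ip_network("192.168.76.0/23")  # hypothetical

print(requested.overlaps(existing))  # True: the daemon would refuse
```

When `overlaps` is true for any existing bridge, `docker network create` exits 1 with exactly the conflict message seen in this failure.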

TestKicCustomSubnet (234.95s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-subnet-20220516222549-2444 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-subnet-20220516222549-2444 --subnet=192.168.60.0/24: (3m15.4431729s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-20220516222549-2444 --format "{{(index .IPAM.Config 0).Subnet}}"
kic_custom_network_test.go:133: (dbg) Non-zero exit: docker network inspect custom-subnet-20220516222549-2444 --format "{{(index .IPAM.Config 0).Subnet}}": exit status 1 (1.0458078s)

-- stdout --
	

-- /stdout --
** stderr ** 
	Error: No such network: custom-subnet-20220516222549-2444

** /stderr **
kic_custom_network_test.go:135: docker network inspect custom-subnet-20220516222549-2444 --format "{{(index .IPAM.Config 0).Subnet}}" failed: exit status 1

-- stdout --
	

-- /stdout --
** stderr ** 
	Error: No such network: custom-subnet-20220516222549-2444

** /stderr **
helpers_test.go:175: Cleaning up "custom-subnet-20220516222549-2444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-subnet-20220516222549-2444
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-subnet-20220516222549-2444: (38.4435359s)
--- FAIL: TestKicCustomSubnet (234.95s)

TestMountStart/serial/StartWithMountFirst (81.59s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-20220516222944-2444 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p mount-start-1-20220516222944-2444 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker: exit status 60 (1m17.6650648s)

                                                
                                                
-- stdout --
	* [mount-start-1-20220516222944-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting minikube without Kubernetes mount-start-1-20220516222944-2444 in cluster mount-start-1-20220516222944-2444
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "mount-start-1-20220516222944-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0516 22:30:02.341531    8904 network_create.go:104] error while trying to create docker network mount-start-1-20220516222944-2444 192.168.76.0/24: create docker network mount-start-1-20220516222944-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true mount-start-1-20220516222944-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network dd92a4cf4412f5c86103a02f13009ac7ef2c7652ec483d9ef5697778a8ea59a1 (br-dd92a4cf4412): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network mount-start-1-20220516222944-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true mount-start-1-20220516222944-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network dd92a4cf4412f5c86103a02f13009ac7ef2c7652ec483d9ef5697778a8ea59a1 (br-dd92a4cf4412): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for mount-start-1-20220516222944-2444 container: docker volume create mount-start-1-20220516222944-2444 --label name.minikube.sigs.k8s.io=mount-start-1-20220516222944-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create mount-start-1-20220516222944-2444: error while creating volume root path '/var/lib/docker/volumes/mount-start-1-20220516222944-2444': mkdir /var/lib/docker/volumes/mount-start-1-20220516222944-2444: read-only file system
	
	E0516 22:30:48.816904    8904 network_create.go:104] error while trying to create docker network mount-start-1-20220516222944-2444 192.168.85.0/24: create docker network mount-start-1-20220516222944-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true mount-start-1-20220516222944-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 176adf6893736a5345390536cf8bb262c70c6f20f12d14e2259c91711701d057 (br-176adf689373): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network mount-start-1-20220516222944-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true mount-start-1-20220516222944-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 176adf6893736a5345390536cf8bb262c70c6f20f12d14e2259c91711701d057 (br-176adf689373): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p mount-start-1-20220516222944-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for mount-start-1-20220516222944-2444 container: docker volume create mount-start-1-20220516222944-2444 --label name.minikube.sigs.k8s.io=mount-start-1-20220516222944-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create mount-start-1-20220516222944-2444: error while creating volume root path '/var/lib/docker/volumes/mount-start-1-20220516222944-2444': mkdir /var/lib/docker/volumes/mount-start-1-20220516222944-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for mount-start-1-20220516222944-2444 container: docker volume create mount-start-1-20220516222944-2444 --label name.minikube.sigs.k8s.io=mount-start-1-20220516222944-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create mount-start-1-20220516222944-2444: error while creating volume root path '/var/lib/docker/volumes/mount-start-1-20220516222944-2444': mkdir /var/lib/docker/volumes/mount-start-1-20220516222944-2444: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p mount-start-1-20220516222944-2444 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker" : exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/StartWithMountFirst]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-1-20220516222944-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect mount-start-1-20220516222944-2444: exit status 1 (1.1182735s)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: mount-start-1-20220516222944-2444

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-1-20220516222944-2444 -n mount-start-1-20220516222944-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-1-20220516222944-2444 -n mount-start-1-20220516222944-2444: exit status 7 (2.790738s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0516 22:31:06.367359    8544 status.go:247] status error: host: state: unknown state "mount-start-1-20220516222944-2444": docker container inspect mount-start-1-20220516222944-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-1-20220516222944-2444

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-20220516222944-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMountStart/serial/StartWithMountFirst (81.59s)
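The repeated `networks have overlapping IPv4` daemon errors above mean the requested subnets (first 192.168.76.0/24, then 192.168.85.0/24) collided with address ranges already claimed by existing bridges; the log shows only the conflicting bridge IDs, not their subnets. The overlap test the daemon applies can be sketched with Python's `ipaddress` module (the `/20` range below is a hypothetical example of a wider network that would swallow a `/24`):

```python
import ipaddress

def overlaps(a: str, b: str) -> bool:
    """True if two CIDR blocks share any addresses (Docker rejects such bridges)."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

print(overlaps("192.168.76.0/24", "192.168.76.0/24"))  # True: identical ranges collide
print(overlaps("192.168.76.0/24", "192.168.64.0/20"))  # True: the /20 contains the /24
print(overlaps("192.168.76.0/24", "192.168.85.0/24"))  # False: disjoint /24s coexist
```

This is why minikube retries with a different third octet after each conflict, and why `docker network ls` plus `docker network inspect` on the `br-*` bridges is the usual way to find the squatter.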

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (81.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220516223121-2444 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker
multinode_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-20220516223121-2444 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker: exit status 60 (1m17.5382699s)

                                                
                                                
-- stdout --
	* [multinode-20220516223121-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node multinode-20220516223121-2444 in cluster multinode-20220516223121-2444
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-20220516223121-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0516 22:31:21.501725    8228 out.go:296] Setting OutFile to fd 804 ...
	I0516 22:31:21.557963    8228 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:31:21.557963    8228 out.go:309] Setting ErrFile to fd 832...
	I0516 22:31:21.558553    8228 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:31:21.568813    8228 out.go:303] Setting JSON to false
	I0516 22:31:21.571517    8228 start.go:115] hostinfo: {"hostname":"minikube2","uptime":3393,"bootTime":1652736888,"procs":146,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0516 22:31:21.571517    8228 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0516 22:31:21.576578    8228 out.go:177] * [multinode-20220516223121-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0516 22:31:21.579162    8228 notify.go:193] Checking for updates...
	I0516 22:31:21.583157    8228 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0516 22:31:21.585915    8228 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0516 22:31:21.588504    8228 out.go:177]   - MINIKUBE_LOCATION=12739
	I0516 22:31:21.593412    8228 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0516 22:31:21.596489    8228 driver.go:358] Setting default libvirt URI to qemu:///system
	I0516 22:31:24.110790    8228 docker.go:137] docker version: linux-20.10.14
	I0516 22:31:24.118384    8228 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:31:26.099394    8228 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9809963s)
	I0516 22:31:26.100480    8228 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 22:31:25.0942685 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:31:26.104412    8228 out.go:177] * Using the docker driver based on user configuration
	I0516 22:31:26.107558    8228 start.go:284] selected driver: docker
	I0516 22:31:26.107558    8228 start.go:806] validating driver "docker" against <nil>
	I0516 22:31:26.107558    8228 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0516 22:31:26.237480    8228 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:31:28.221973    8228 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9844794s)
	I0516 22:31:28.221973    8228 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 22:31:27.206652 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:31:28.221973    8228 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0516 22:31:28.223091    8228 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0516 22:31:28.227316    8228 out.go:177] * Using Docker Desktop driver with the root privilege
	I0516 22:31:28.229886    8228 cni.go:95] Creating CNI manager for ""
	I0516 22:31:28.229938    8228 cni.go:156] 0 nodes found, recommending kindnet
	I0516 22:31:28.230001    8228 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0516 22:31:28.230001    8228 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0516 22:31:28.230001    8228 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0516 22:31:28.230052    8228 start_flags.go:306] config:
	{Name:multinode-20220516223121-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:multinode-20220516223121-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 22:31:28.233257    8228 out.go:177] * Starting control plane node multinode-20220516223121-2444 in cluster multinode-20220516223121-2444
	I0516 22:31:28.236332    8228 cache.go:120] Beginning downloading kic base image for docker with docker
	I0516 22:31:28.239302    8228 out.go:177] * Pulling base image ...
	I0516 22:31:28.240942    8228 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0516 22:31:28.240942    8228 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0516 22:31:28.240942    8228 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0516 22:31:28.240942    8228 cache.go:57] Caching tarball of preloaded images
	I0516 22:31:28.241952    8228 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0516 22:31:28.241952    8228 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0516 22:31:28.241952    8228 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\multinode-20220516223121-2444\config.json ...
	I0516 22:31:28.241952    8228 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\multinode-20220516223121-2444\config.json: {Name:mkc62049ad1bf57c4a1885ce365fa3bd8613af88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0516 22:31:29.253130    8228 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0516 22:31:29.253460    8228 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:31:29.253557    8228 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:31:29.253557    8228 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
	I0516 22:31:29.253557    8228 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory, skipping pull
	I0516 22:31:29.253557    8228 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in cache, skipping pull
	I0516 22:31:29.254155    8228 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
	I0516 22:31:29.254210    8228 cache.go:160] Loading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from local cache
	I0516 22:31:29.254253    8228 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:31:31.464058    8228 image.go:219] response: {"errorDetail":{"message":"mkdir /var/lib/docker/tmp/docker-import-332692113: read-only file system"},"error":"mkdir /var/lib/docker/tmp/docker-import-332692113: read-only file system"}
	I0516 22:31:31.464100    8228 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from cached tarball
	I0516 22:31:31.464179    8228 cache.go:206] Successfully downloaded all kic artifacts
	I0516 22:31:31.464343    8228 start.go:352] acquiring machines lock for multinode-20220516223121-2444: {Name:mk85c04f827b76c021a94c8d716dce0669525244 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:31:31.464583    8228 start.go:356] acquired machines lock for "multinode-20220516223121-2444" in 110µs
	I0516 22:31:31.464822    8228 start.go:91] Provisioning new machine with config: &{Name:multinode-20220516223121-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:multinode-20220516223121-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0516 22:31:31.465013    8228 start.go:131] createHost starting for "" (driver="docker")
	I0516 22:31:31.469104    8228 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0516 22:31:31.469187    8228 start.go:165] libmachine.API.Create for "multinode-20220516223121-2444" (driver="docker")
	I0516 22:31:31.469187    8228 client.go:168] LocalClient.Create starting
	I0516 22:31:31.470206    8228 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 22:31:31.470539    8228 main.go:134] libmachine: Decoding PEM data...
	I0516 22:31:31.470585    8228 main.go:134] libmachine: Parsing certificate...
	I0516 22:31:31.470761    8228 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 22:31:31.470761    8228 main.go:134] libmachine: Decoding PEM data...
	I0516 22:31:31.470761    8228 main.go:134] libmachine: Parsing certificate...
	I0516 22:31:31.481192    8228 cli_runner.go:164] Run: docker network inspect multinode-20220516223121-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:31:32.492707    8228 cli_runner.go:211] docker network inspect multinode-20220516223121-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:31:32.492707    8228 cli_runner.go:217] Completed: docker network inspect multinode-20220516223121-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0114214s)
	I0516 22:31:32.504833    8228 network_create.go:272] running [docker network inspect multinode-20220516223121-2444] to gather additional debugging logs...
	I0516 22:31:32.504833    8228 cli_runner.go:164] Run: docker network inspect multinode-20220516223121-2444
	W0516 22:31:33.547865    8228 cli_runner.go:211] docker network inspect multinode-20220516223121-2444 returned with exit code 1
	I0516 22:31:33.547865    8228 cli_runner.go:217] Completed: docker network inspect multinode-20220516223121-2444: (1.0430245s)
	I0516 22:31:33.547865    8228 network_create.go:275] error running [docker network inspect multinode-20220516223121-2444]: docker network inspect multinode-20220516223121-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220516223121-2444
	I0516 22:31:33.547865    8228 network_create.go:277] output of [docker network inspect multinode-20220516223121-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220516223121-2444
	
	** /stderr **
	I0516 22:31:33.557706    8228 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 22:31:34.601471    8228 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0435908s)
	I0516 22:31:34.624146    8228 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000354160] misses:0}
	I0516 22:31:34.624146    8228 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:31:34.624146    8228 network_create.go:115] attempt to create docker network multinode-20220516223121-2444 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0516 22:31:34.634042    8228 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444
	W0516 22:31:35.647483    8228 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444 returned with exit code 1
	I0516 22:31:35.647588    8228 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444: (1.0133785s)
	W0516 22:31:35.647664    8228 network_create.go:107] failed to create docker network multinode-20220516223121-2444 192.168.49.0/24, will retry: subnet is taken
	I0516 22:31:35.667698    8228 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000354160] amended:false}} dirty:map[] misses:0}
	I0516 22:31:35.667698    8228 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:31:35.686884    8228 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000354160] amended:true}} dirty:map[192.168.49.0:0xc000354160 192.168.58.0:0xc000724148] misses:0}
	I0516 22:31:35.686969    8228 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:31:35.686969    8228 network_create.go:115] attempt to create docker network multinode-20220516223121-2444 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0516 22:31:35.694026    8228 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444
	W0516 22:31:36.698138    8228 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444 returned with exit code 1
	I0516 22:31:36.698138    8228 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444: (1.0038232s)
	W0516 22:31:36.698138    8228 network_create.go:107] failed to create docker network multinode-20220516223121-2444 192.168.58.0/24, will retry: subnet is taken
	I0516 22:31:36.727905    8228 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000354160] amended:true}} dirty:map[192.168.49.0:0xc000354160 192.168.58.0:0xc000724148] misses:1}
	I0516 22:31:36.728041    8228 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:31:36.746071    8228 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000354160] amended:true}} dirty:map[192.168.49.0:0xc000354160 192.168.58.0:0xc000724148 192.168.67.0:0xc000724450] misses:1}
	I0516 22:31:36.746071    8228 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:31:36.746071    8228 network_create.go:115] attempt to create docker network multinode-20220516223121-2444 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0516 22:31:36.754314    8228 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444
	W0516 22:31:37.778611    8228 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444 returned with exit code 1
	I0516 22:31:37.778611    8228 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444: (1.0242898s)
	W0516 22:31:37.778611    8228 network_create.go:107] failed to create docker network multinode-20220516223121-2444 192.168.67.0/24, will retry: subnet is taken
	I0516 22:31:37.795577    8228 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000354160] amended:true}} dirty:map[192.168.49.0:0xc000354160 192.168.58.0:0xc000724148 192.168.67.0:0xc000724450] misses:2}
	I0516 22:31:37.795577    8228 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:31:37.814369    8228 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000354160] amended:true}} dirty:map[192.168.49.0:0xc000354160 192.168.58.0:0xc000724148 192.168.67.0:0xc000724450 192.168.76.0:0xc0005b8670] misses:2}
	I0516 22:31:37.814369    8228 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:31:37.814369    8228 network_create.go:115] attempt to create docker network multinode-20220516223121-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0516 22:31:37.823839    8228 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444
	W0516 22:31:38.827743    8228 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444 returned with exit code 1
	I0516 22:31:38.827818    8228 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444: (1.0038662s)
	E0516 22:31:38.827929    8228 network_create.go:104] error while trying to create docker network multinode-20220516223121-2444 192.168.76.0/24: create docker network multinode-20220516223121-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 2f592585e9c6a08077778690db243d7a8a8bd211e57fbfe334cc712febb01e53 (br-2f592585e9c6): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	W0516 22:31:38.827929    8228 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220516223121-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 2f592585e9c6a08077778690db243d7a8a8bd211e57fbfe334cc712febb01e53 (br-2f592585e9c6): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220516223121-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 2f592585e9c6a08077778690db243d7a8a8bd211e57fbfe334cc712febb01e53 (br-2f592585e9c6): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	I0516 22:31:38.846771    8228 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 22:31:39.876543    8228 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0295667s)
	I0516 22:31:39.884726    8228 cli_runner.go:164] Run: docker volume create multinode-20220516223121-2444 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 22:31:40.897604    8228 cli_runner.go:211] docker volume create multinode-20220516223121-2444 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 22:31:40.897747    8228 cli_runner.go:217] Completed: docker volume create multinode-20220516223121-2444 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0128715s)
	I0516 22:31:40.897877    8228 client.go:171] LocalClient.Create took 9.4285996s
	I0516 22:31:42.916994    8228 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:31:42.924084    8228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:31:43.954159    8228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:31:43.954159    8228 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0300688s)
	I0516 22:31:43.954159    8228 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:31:44.253492    8228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:31:45.280825    8228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:31:45.280893    8228 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.027298s)
	W0516 22:31:45.281092    8228 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	
	W0516 22:31:45.281134    8228 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:31:45.291922    8228 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:31:45.298009    8228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:31:46.311373    8228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:31:46.311478    8228 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0130326s)
	I0516 22:31:46.311478    8228 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:31:46.614720    8228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:31:47.641174    8228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:31:47.641174    8228 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0263213s)
	W0516 22:31:47.641174    8228 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	
	W0516 22:31:47.641174    8228 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:31:47.641174    8228 start.go:134] duration metric: createHost completed in 16.1760508s
	I0516 22:31:47.641174    8228 start.go:81] releasing machines lock for "multinode-20220516223121-2444", held for 16.1764342s
	W0516 22:31:47.641714    8228 start.go:608] error starting host: creating host: create: creating: setting up container node: creating volume for multinode-20220516223121-2444 container: docker volume create multinode-20220516223121-2444 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220516223121-2444: error while creating volume root path '/var/lib/docker/volumes/multinode-20220516223121-2444': mkdir /var/lib/docker/volumes/multinode-20220516223121-2444: read-only file system
	I0516 22:31:47.661507    8228 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:31:48.680192    8228 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:31:48.680192    8228 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0184744s)
	I0516 22:31:48.680389    8228 delete.go:82] Unable to get host status for multinode-20220516223121-2444, assuming it has already been deleted: state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	W0516 22:31:48.680433    8228 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for multinode-20220516223121-2444 container: docker volume create multinode-20220516223121-2444 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220516223121-2444: error while creating volume root path '/var/lib/docker/volumes/multinode-20220516223121-2444': mkdir /var/lib/docker/volumes/multinode-20220516223121-2444: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for multinode-20220516223121-2444 container: docker volume create multinode-20220516223121-2444 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220516223121-2444: error while creating volume root path '/var/lib/docker/volumes/multinode-20220516223121-2444': mkdir /var/lib/docker/volumes/multinode-20220516223121-2444: read-only file system
	
	I0516 22:31:48.680433    8228 start.go:623] Will try again in 5 seconds ...
	I0516 22:31:53.690107    8228 start.go:352] acquiring machines lock for multinode-20220516223121-2444: {Name:mk85c04f827b76c021a94c8d716dce0669525244 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:31:53.690107    8228 start.go:356] acquired machines lock for "multinode-20220516223121-2444" in 0s
	I0516 22:31:53.690107    8228 start.go:94] Skipping create...Using existing machine configuration
	I0516 22:31:53.690107    8228 fix.go:55] fixHost starting: 
	I0516 22:31:53.706656    8228 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:31:54.744504    8228 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:31:54.744728    8228 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0378411s)
	I0516 22:31:54.744728    8228 fix.go:103] recreateIfNeeded on multinode-20220516223121-2444: state= err=unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:31:54.744728    8228 fix.go:108] machineExists: false. err=machine does not exist
	I0516 22:31:54.749150    8228 out.go:177] * docker "multinode-20220516223121-2444" container is missing, will recreate.
	I0516 22:31:54.751476    8228 delete.go:124] DEMOLISHING multinode-20220516223121-2444 ...
	I0516 22:31:54.766202    8228 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:31:55.822737    8228 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:31:55.822737    8228 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.056528s)
	W0516 22:31:55.822737    8228 stop.go:75] unable to get state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:31:55.822737    8228 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:31:55.842043    8228 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:31:56.854913    8228 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:31:56.854913    8228 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0128637s)
	I0516 22:31:56.854913    8228 delete.go:82] Unable to get host status for multinode-20220516223121-2444, assuming it has already been deleted: state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:31:56.863440    8228 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220516223121-2444
	W0516 22:31:57.888235    8228 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220516223121-2444 returned with exit code 1
	I0516 22:31:57.888311    8228 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} multinode-20220516223121-2444: (1.0247872s)
	I0516 22:31:57.888356    8228 kic.go:356] could not find the container multinode-20220516223121-2444 to remove it. will try anyways
	I0516 22:31:57.897264    8228 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:31:58.936399    8228 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:31:58.936533    8228 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0389812s)
	W0516 22:31:58.936533    8228 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:31:58.944858    8228 cli_runner.go:164] Run: docker exec --privileged -t multinode-20220516223121-2444 /bin/bash -c "sudo init 0"
	W0516 22:31:59.985209    8228 cli_runner.go:211] docker exec --privileged -t multinode-20220516223121-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 22:31:59.985240    8228 cli_runner.go:217] Completed: docker exec --privileged -t multinode-20220516223121-2444 /bin/bash -c "sudo init 0": (1.0402248s)
	I0516 22:31:59.985321    8228 oci.go:641] error shutdown multinode-20220516223121-2444: docker exec --privileged -t multinode-20220516223121-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:32:01.001994    8228 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:32:02.037652    8228 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:32:02.037652    8228 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0356509s)
	I0516 22:32:02.037652    8228 oci.go:653] temporary error verifying shutdown: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:32:02.037652    8228 oci.go:655] temporary error: container multinode-20220516223121-2444 status is  but expect it to be exited
	I0516 22:32:02.037652    8228 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:32:02.524772    8228 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:32:03.561352    8228 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:32:03.561352    8228 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0365727s)
	I0516 22:32:03.561352    8228 oci.go:653] temporary error verifying shutdown: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:32:03.561352    8228 oci.go:655] temporary error: container multinode-20220516223121-2444 status is  but expect it to be exited
	I0516 22:32:03.561352    8228 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:32:04.472418    8228 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:32:05.482021    8228 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:32:05.482168    8228 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0093472s)
	I0516 22:32:05.482232    8228 oci.go:653] temporary error verifying shutdown: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:32:05.482283    8228 oci.go:655] temporary error: container multinode-20220516223121-2444 status is  but expect it to be exited
	I0516 22:32:05.482330    8228 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:32:06.138226    8228 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:32:07.163054    8228 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:32:07.163232    8228 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0248215s)
	I0516 22:32:07.163276    8228 oci.go:653] temporary error verifying shutdown: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:32:07.163276    8228 oci.go:655] temporary error: container multinode-20220516223121-2444 status is  but expect it to be exited
	I0516 22:32:07.163336    8228 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:32:08.284786    8228 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:32:09.323522    8228 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:32:09.323657    8228 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0375639s)
	I0516 22:32:09.323657    8228 oci.go:653] temporary error verifying shutdown: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:32:09.323657    8228 oci.go:655] temporary error: container multinode-20220516223121-2444 status is  but expect it to be exited
	I0516 22:32:09.323657    8228 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:32:10.844640    8228 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:32:11.869053    8228 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:32:11.869053    8228 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0244069s)
	I0516 22:32:11.869053    8228 oci.go:653] temporary error verifying shutdown: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:32:11.869053    8228 oci.go:655] temporary error: container multinode-20220516223121-2444 status is  but expect it to be exited
	I0516 22:32:11.869053    8228 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:32:14.925983    8228 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:32:15.920501    8228 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:32:15.920738    8228 oci.go:653] temporary error verifying shutdown: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:32:15.920775    8228 oci.go:655] temporary error: container multinode-20220516223121-2444 status is  but expect it to be exited
	I0516 22:32:15.920798    8228 oci.go:88] couldn't shut down multinode-20220516223121-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	 
	I0516 22:32:15.929968    8228 cli_runner.go:164] Run: docker rm -f -v multinode-20220516223121-2444
	I0516 22:32:16.956186    8228 cli_runner.go:217] Completed: docker rm -f -v multinode-20220516223121-2444: (1.0259706s)
	I0516 22:32:16.963441    8228 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220516223121-2444
	W0516 22:32:17.972403    8228 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220516223121-2444 returned with exit code 1
	I0516 22:32:17.972403    8228 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} multinode-20220516223121-2444: (1.0087416s)
	I0516 22:32:17.979188    8228 cli_runner.go:164] Run: docker network inspect multinode-20220516223121-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:32:18.990107    8228 cli_runner.go:211] docker network inspect multinode-20220516223121-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:32:18.990107    8228 cli_runner.go:217] Completed: docker network inspect multinode-20220516223121-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0109114s)
	I0516 22:32:18.997893    8228 network_create.go:272] running [docker network inspect multinode-20220516223121-2444] to gather additional debugging logs...
	I0516 22:32:18.998004    8228 cli_runner.go:164] Run: docker network inspect multinode-20220516223121-2444
	W0516 22:32:20.013644    8228 cli_runner.go:211] docker network inspect multinode-20220516223121-2444 returned with exit code 1
	I0516 22:32:20.013644    8228 cli_runner.go:217] Completed: docker network inspect multinode-20220516223121-2444: (1.0156331s)
	I0516 22:32:20.013644    8228 network_create.go:275] error running [docker network inspect multinode-20220516223121-2444]: docker network inspect multinode-20220516223121-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220516223121-2444
	I0516 22:32:20.013644    8228 network_create.go:277] output of [docker network inspect multinode-20220516223121-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220516223121-2444
	
	** /stderr **
	W0516 22:32:20.014966    8228 delete.go:139] delete failed (probably ok) <nil>
	I0516 22:32:20.015178    8228 fix.go:115] Sleeping 1 second for extra luck!
	I0516 22:32:21.020815    8228 start.go:131] createHost starting for "" (driver="docker")
	I0516 22:32:21.025433    8228 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0516 22:32:21.025948    8228 start.go:165] libmachine.API.Create for "multinode-20220516223121-2444" (driver="docker")
	I0516 22:32:21.026036    8228 client.go:168] LocalClient.Create starting
	I0516 22:32:21.026160    8228 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 22:32:21.026759    8228 main.go:134] libmachine: Decoding PEM data...
	I0516 22:32:21.026759    8228 main.go:134] libmachine: Parsing certificate...
	I0516 22:32:21.027033    8228 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 22:32:21.027308    8228 main.go:134] libmachine: Decoding PEM data...
	I0516 22:32:21.027348    8228 main.go:134] libmachine: Parsing certificate...
	I0516 22:32:21.044153    8228 cli_runner.go:164] Run: docker network inspect multinode-20220516223121-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:32:22.112677    8228 cli_runner.go:211] docker network inspect multinode-20220516223121-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:32:22.112677    8228 cli_runner.go:217] Completed: docker network inspect multinode-20220516223121-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.06849s)
	I0516 22:32:22.122475    8228 network_create.go:272] running [docker network inspect multinode-20220516223121-2444] to gather additional debugging logs...
	I0516 22:32:22.122475    8228 cli_runner.go:164] Run: docker network inspect multinode-20220516223121-2444
	W0516 22:32:23.128246    8228 cli_runner.go:211] docker network inspect multinode-20220516223121-2444 returned with exit code 1
	I0516 22:32:23.128372    8228 cli_runner.go:217] Completed: docker network inspect multinode-20220516223121-2444: (1.0057297s)
	I0516 22:32:23.128403    8228 network_create.go:275] error running [docker network inspect multinode-20220516223121-2444]: docker network inspect multinode-20220516223121-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220516223121-2444
	I0516 22:32:23.128453    8228 network_create.go:277] output of [docker network inspect multinode-20220516223121-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220516223121-2444
	
	** /stderr **
	I0516 22:32:23.137459    8228 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 22:32:24.192051    8228 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0545845s)
	I0516 22:32:24.208866    8228 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000354160] amended:true}} dirty:map[192.168.49.0:0xc000354160 192.168.58.0:0xc000724148 192.168.67.0:0xc000724450 192.168.76.0:0xc0005b8670] misses:2}
	I0516 22:32:24.208866    8228 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:32:24.223013    8228 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000354160] amended:true}} dirty:map[192.168.49.0:0xc000354160 192.168.58.0:0xc000724148 192.168.67.0:0xc000724450 192.168.76.0:0xc0005b8670] misses:3}
	I0516 22:32:24.223013    8228 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:32:24.237813    8228 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000354160 192.168.58.0:0xc000724148 192.168.67.0:0xc000724450 192.168.76.0:0xc0005b8670] amended:false}} dirty:map[] misses:0}
	I0516 22:32:24.237813    8228 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:32:24.254626    8228 network.go:279] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000354160 192.168.58.0:0xc000724148 192.168.67.0:0xc000724450 192.168.76.0:0xc0005b8670] amended:false}} dirty:map[] misses:0}
	I0516 22:32:24.255256    8228 network.go:238] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:32:24.271115    8228 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000354160 192.168.58.0:0xc000724148 192.168.67.0:0xc000724450 192.168.76.0:0xc0005b8670] amended:true}} dirty:map[192.168.49.0:0xc000354160 192.168.58.0:0xc000724148 192.168.67.0:0xc000724450 192.168.76.0:0xc0005b8670 192.168.85.0:0xc000724490] misses:0}
	I0516 22:32:24.271115    8228 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:32:24.271115    8228 network_create.go:115] attempt to create docker network multinode-20220516223121-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0516 22:32:24.279291    8228 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444
	W0516 22:32:25.300634    8228 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444 returned with exit code 1
	I0516 22:32:25.300634    8228 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444: (1.021066s)
	E0516 22:32:25.300634    8228 network_create.go:104] error while trying to create docker network multinode-20220516223121-2444 192.168.85.0/24: create docker network multinode-20220516223121-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d13f6eb26f59dff32402e890e3f815e3058ff091681e8cb5e72068be1e1fe583 (br-d13f6eb26f59): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	W0516 22:32:25.300634    8228 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220516223121-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d13f6eb26f59dff32402e890e3f815e3058ff091681e8cb5e72068be1e1fe583 (br-d13f6eb26f59): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220516223121-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d13f6eb26f59dff32402e890e3f815e3058ff091681e8cb5e72068be1e1fe583 (br-d13f6eb26f59): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	I0516 22:32:25.316203    8228 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 22:32:26.347622    8228 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0314119s)
	I0516 22:32:26.354622    8228 cli_runner.go:164] Run: docker volume create multinode-20220516223121-2444 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 22:32:27.395888    8228 cli_runner.go:211] docker volume create multinode-20220516223121-2444 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 22:32:27.395888    8228 cli_runner.go:217] Completed: docker volume create multinode-20220516223121-2444 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0412594s)
	I0516 22:32:27.395888    8228 client.go:171] LocalClient.Create took 6.3698085s
	I0516 22:32:29.412519    8228 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:32:29.418863    8228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:32:30.449606    8228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:32:30.449606    8228 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0297257s)
	I0516 22:32:30.449606    8228 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:32:30.794117    8228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:32:31.818281    8228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:32:31.818281    8228 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0240949s)
	W0516 22:32:31.818281    8228 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	
	W0516 22:32:31.818281    8228 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:32:31.831823    8228 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:32:31.841434    8228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:32:32.908267    8228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:32:32.908267    8228 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0667679s)
	I0516 22:32:32.908267    8228 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:32:33.153259    8228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:32:34.182877    8228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:32:34.182991    8228 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0295507s)
	W0516 22:32:34.183161    8228 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	
	W0516 22:32:34.183161    8228 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:32:34.183161    8228 start.go:134] duration metric: createHost completed in 13.1622552s
	I0516 22:32:34.195406    8228 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:32:34.202399    8228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:32:35.216467    8228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:32:35.216467    8228 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0140605s)
	I0516 22:32:35.216467    8228 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:32:35.483399    8228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:32:36.489902    8228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:32:36.489902    8228 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0064961s)
	W0516 22:32:36.489902    8228 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	
	W0516 22:32:36.489902    8228 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:32:36.500491    8228 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:32:36.508247    8228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:32:37.524517    8228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:32:37.524681    8228 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0159942s)
	I0516 22:32:37.524746    8228 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:32:37.748275    8228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:32:38.765034    8228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:32:38.765034    8228 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0167515s)
	W0516 22:32:38.765034    8228 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	
	W0516 22:32:38.765034    8228 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:32:38.765034    8228 fix.go:57] fixHost completed within 45.0746159s
	I0516 22:32:38.765034    8228 start.go:81] releasing machines lock for "multinode-20220516223121-2444", held for 45.0746159s
	W0516 22:32:38.766066    8228 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-20220516223121-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220516223121-2444 container: docker volume create multinode-20220516223121-2444 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220516223121-2444: error while creating volume root path '/var/lib/docker/volumes/multinode-20220516223121-2444': mkdir /var/lib/docker/volumes/multinode-20220516223121-2444: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p multinode-20220516223121-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220516223121-2444 container: docker volume create multinode-20220516223121-2444 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220516223121-2444: error while creating volume root path '/var/lib/docker/volumes/multinode-20220516223121-2444': mkdir /var/lib/docker/volumes/multinode-20220516223121-2444: read-only file system
	
	I0516 22:32:38.772403    8228 out.go:177] 
	W0516 22:32:38.774931    8228 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220516223121-2444 container: docker volume create multinode-20220516223121-2444 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220516223121-2444: error while creating volume root path '/var/lib/docker/volumes/multinode-20220516223121-2444': mkdir /var/lib/docker/volumes/multinode-20220516223121-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220516223121-2444 container: docker volume create multinode-20220516223121-2444 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220516223121-2444: error while creating volume root path '/var/lib/docker/volumes/multinode-20220516223121-2444': mkdir /var/lib/docker/volumes/multinode-20220516223121-2444: read-only file system
	
	W0516 22:32:38.776018    8228 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0516 22:32:38.776018    8228 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0516 22:32:38.779837    8228 out.go:177] 

** /stderr **
multinode_test.go:85: failed to start cluster. args "out/minikube-windows-amd64.exe start -p multinode-20220516223121-2444 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker" : exit status 60
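The suggestions printed in the stderr above (delete the profile, restart Docker) amount to the following commands for this run. This is a sketch only, reusing the profile name and flags from the failing invocation in the log:

```shell
# Remove the half-created profile, restart Docker Desktop, then retry the
# same start invocation that failed above (flags copied from the log).
out/minikube-windows-amd64.exe delete -p multinode-20220516223121-2444
# (restart Docker Desktop here; the daemon's volume root was read-only)
out/minikube-windows-amd64.exe start -p multinode-20220516223121-2444 \
  --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker
```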
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220516223121-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220516223121-2444: exit status 1 (1.089066s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220516223121-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220516223121-2444 -n multinode-20220516223121-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220516223121-2444 -n multinode-20220516223121-2444: exit status 7 (2.7827821s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:32:42.758827    6044 status.go:247] status error: host: state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220516223121-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (81.52s)

TestMultiNode/serial/DeployApp2Nodes (16.77s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220516223121-2444 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20220516223121-2444 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (1.8013554s)

** stderr ** 
	error: cluster "multinode-20220516223121-2444" does not exist

** /stderr **
multinode_test.go:481: failed to create busybox deployment to multinode cluster
multinode_test.go:484: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220516223121-2444 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20220516223121-2444 -- rollout status deployment/busybox: exit status 1 (1.8431303s)

** stderr ** 
	error: no server found for cluster "multinode-20220516223121-2444"

** /stderr **
multinode_test.go:486: failed to deploy busybox to multinode cluster
multinode_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220516223121-2444 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:490: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20220516223121-2444 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (1.8501575s)

** stderr ** 
	error: no server found for cluster "multinode-20220516223121-2444"

** /stderr **
multinode_test.go:492: failed to retrieve Pod IPs
multinode_test.go:496: expected 2 Pod IPs but got 1
multinode_test.go:502: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220516223121-2444 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:502: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20220516223121-2444 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (1.835292s)

** stderr ** 
	error: no server found for cluster "multinode-20220516223121-2444"

** /stderr **
multinode_test.go:504: failed get Pod names
multinode_test.go:510: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220516223121-2444 -- exec  -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20220516223121-2444 -- exec  -- nslookup kubernetes.io: exit status 1 (1.8584931s)

** stderr ** 
	error: no server found for cluster "multinode-20220516223121-2444"

** /stderr **
multinode_test.go:512: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:520: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220516223121-2444 -- exec  -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20220516223121-2444 -- exec  -- nslookup kubernetes.default: exit status 1 (1.889769s)

** stderr ** 
	error: no server found for cluster "multinode-20220516223121-2444"

** /stderr **
multinode_test.go:522: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220516223121-2444 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20220516223121-2444 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (1.85852s)

** stderr ** 
	error: no server found for cluster "multinode-20220516223121-2444"

** /stderr **
multinode_test.go:530: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220516223121-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220516223121-2444: exit status 1 (1.0510493s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220516223121-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220516223121-2444 -n multinode-20220516223121-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220516223121-2444 -n multinode-20220516223121-2444: exit status 7 (2.765935s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:32:59.536299    1692 status.go:247] status error: host: state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220516223121-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (16.77s)

TestMultiNode/serial/PingHostFrom2Pods (5.85s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220516223121-2444 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:538: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20220516223121-2444 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (1.8856181s)

** stderr ** 
	error: no server found for cluster "multinode-20220516223121-2444"

** /stderr **
multinode_test.go:540: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220516223121-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220516223121-2444: exit status 1 (1.1134336s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220516223121-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220516223121-2444 -n multinode-20220516223121-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220516223121-2444 -n multinode-20220516223121-2444: exit status 7 (2.8380995s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:33:05.375951    5108 status.go:247] status error: host: state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220516223121-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (5.85s)

TestMultiNode/serial/AddNode (6.92s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-20220516223121-2444 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-20220516223121-2444 -v 3 --alsologtostderr: exit status 80 (3.068027s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0516 22:33:05.641906    2980 out.go:296] Setting OutFile to fd 820 ...
	I0516 22:33:05.708187    2980 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:33:05.708187    2980 out.go:309] Setting ErrFile to fd 836...
	I0516 22:33:05.708187    2980 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:33:05.721823    2980 mustload.go:65] Loading cluster: multinode-20220516223121-2444
	I0516 22:33:05.723384    2980 config.go:178] Loaded profile config "multinode-20220516223121-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:33:05.739085    2980 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:33:08.192876    2980 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:33:08.192942    2980 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (2.4525659s)
	I0516 22:33:08.198048    2980 out.go:177] 
	W0516 22:33:08.200457    2980 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	
	X Exiting due to GUEST_STATUS: state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	
	W0516 22:33:08.200457    2980 out.go:239] * 
	* 
	W0516 22:33:08.436745    2980 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_e3f75f9fdd712fd5423563a6a11e787bf6359068_23.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_e3f75f9fdd712fd5423563a6a11e787bf6359068_23.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0516 22:33:08.439787    2980 out.go:177] 

** /stderr **
multinode_test.go:110: failed to add node to current cluster. args "out/minikube-windows-amd64.exe node add -p multinode-20220516223121-2444 -v 3 --alsologtostderr" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/AddNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220516223121-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220516223121-2444: exit status 1 (1.068884s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220516223121-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220516223121-2444 -n multinode-20220516223121-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220516223121-2444 -n multinode-20220516223121-2444: exit status 7 (2.7755597s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:33:12.300173    6352 status.go:247] status error: host: state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220516223121-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/AddNode (6.92s)

TestMultiNode/serial/ProfileList (7.71s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (3.805448s)
multinode_test.go:153: expected profile "multinode-20220516223121-2444" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-20220516223121-2444\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"multinode-20220516223121-2444\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.23.6\",\"ClusterName\":\"multinode-20220516223121-2444\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":[{\"Component\":\"kubelet\",\"Key\":\"cni-conf-dir\",\"Value\":\"/etc/cni/net.mk\"}],\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.23.6\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"C:\\\\Users\\\\jenkins.minikube2:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false}}]}"*. args: "out/minikube-windows-amd64.exe profile list --output json"
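The node-count assertion above can be reproduced outside the test harness by counting the entries under `Config.Nodes` in the `profile list --output json` payload. A minimal sketch, with the JSON abbreviated from the log (field names taken from the output above):

```shell
# Abbreviated profile JSON from the log: Config.Nodes holds one entry,
# while the test expects three after the 2-node start plus `node add`.
json='{"invalid":[],"valid":[{"Name":"multinode-20220516223121-2444","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}'
echo "$json" | python3 -c 'import json,sys; p=json.load(sys.stdin); print(len(p["valid"][0]["Config"]["Nodes"]))'
```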
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ProfileList]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220516223121-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220516223121-2444: exit status 1 (1.102822s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220516223121-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220516223121-2444 -n multinode-20220516223121-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220516223121-2444 -n multinode-20220516223121-2444: exit status 7 (2.7814413s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:33:20.007871    8976 status.go:247] status error: host: state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220516223121-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ProfileList (7.71s)

TestMultiNode/serial/CopyFile (6.68s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220516223121-2444 status --output json --alsologtostderr
multinode_test.go:171: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220516223121-2444 status --output json --alsologtostderr: exit status 7 (2.7640289s)

-- stdout --
	{"Name":"multinode-20220516223121-2444","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}

-- /stdout --
** stderr ** 
	I0516 22:33:20.273868    6328 out.go:296] Setting OutFile to fd 884 ...
	I0516 22:33:20.333446    6328 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:33:20.333446    6328 out.go:309] Setting ErrFile to fd 864...
	I0516 22:33:20.333446    6328 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:33:20.343274    6328 out.go:303] Setting JSON to true
	I0516 22:33:20.343274    6328 mustload.go:65] Loading cluster: multinode-20220516223121-2444
	I0516 22:33:20.344277    6328 config.go:178] Loaded profile config "multinode-20220516223121-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:33:20.344277    6328 status.go:253] checking status of multinode-20220516223121-2444 ...
	I0516 22:33:20.360269    6328 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:33:22.771208    6328 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:33:22.771311    6328 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (2.4107298s)
	I0516 22:33:22.771402    6328 status.go:328] multinode-20220516223121-2444 host status = "" (err=state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	)
	I0516 22:33:22.771402    6328 status.go:255] multinode-20220516223121-2444 status: &{Name:multinode-20220516223121-2444 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0516 22:33:22.771402    6328 status.go:258] status error: host: state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	E0516 22:33:22.771402    6328 status.go:261] The "multinode-20220516223121-2444" host does not exist!

** /stderr **
multinode_test.go:178: failed to decode json from status: args "out/minikube-windows-amd64.exe -p multinode-20220516223121-2444 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
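The unmarshal error above is an object-versus-array mismatch: the test decodes stdout into a Go `[]cmd.Status` slice, but the single-profile `status --output json` output shown above is a bare JSON object. A minimal sketch of the distinction, with the sample trimmed from the stdout above:

```shell
# A bare object decodes as a dict/object, not a list; the test-side decoder
# expects a JSON array ([]cmd.Status), hence the failure in the log.
obj='{"Name":"multinode-20220516223121-2444","Host":"Nonexistent"}'
echo "$obj"   | python3 -c 'import json,sys; print(type(json.load(sys.stdin)).__name__)'
echo "[$obj]" | python3 -c 'import json,sys; d=json.load(sys.stdin); print(type(d).__name__, len(d))'
```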
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/CopyFile]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220516223121-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220516223121-2444: exit status 1 (1.1227656s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220516223121-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220516223121-2444 -n multinode-20220516223121-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220516223121-2444 -n multinode-20220516223121-2444: exit status 7 (2.7794062s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:33:26.684963    4436 status.go:247] status error: host: state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220516223121-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/CopyFile (6.68s)

TestMultiNode/serial/StopNode (10.15s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220516223121-2444 node stop m03
multinode_test.go:208: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220516223121-2444 node stop m03: exit status 85 (597.7772ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: Could not find node m03
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_a721422985a44b3996d93fcfe1a29c6759a29372_1.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:210: node stop returned an error. args "out/minikube-windows-amd64.exe -p multinode-20220516223121-2444 node stop m03": exit status 85
multinode_test.go:214: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220516223121-2444 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220516223121-2444 status: exit status 7 (2.7622955s)

-- stdout --
	multinode-20220516223121-2444
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E0516 22:33:30.046607    8484 status.go:258] status error: host: state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	E0516 22:33:30.046607    8484 status.go:261] The "multinode-20220516223121-2444" host does not exist!

** /stderr **
multinode_test.go:221: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220516223121-2444 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220516223121-2444 status --alsologtostderr: exit status 7 (2.845385s)

-- stdout --
	multinode-20220516223121-2444
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0516 22:33:30.327124    7952 out.go:296] Setting OutFile to fd 828 ...
	I0516 22:33:30.396622    7952 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:33:30.396622    7952 out.go:309] Setting ErrFile to fd 840...
	I0516 22:33:30.396622    7952 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:33:30.409022    7952 out.go:303] Setting JSON to false
	I0516 22:33:30.409022    7952 mustload.go:65] Loading cluster: multinode-20220516223121-2444
	I0516 22:33:30.409885    7952 config.go:178] Loaded profile config "multinode-20220516223121-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:33:30.409885    7952 status.go:253] checking status of multinode-20220516223121-2444 ...
	I0516 22:33:30.428148    7952 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:33:32.892020    7952 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:33:32.892020    7952 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (2.4636252s)
	I0516 22:33:32.892207    7952 status.go:328] multinode-20220516223121-2444 host status = "" (err=state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	)
	I0516 22:33:32.892207    7952 status.go:255] multinode-20220516223121-2444 status: &{Name:multinode-20220516223121-2444 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0516 22:33:32.892207    7952 status.go:258] status error: host: state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	E0516 22:33:32.892207    7952 status.go:261] The "multinode-20220516223121-2444" host does not exist!

** /stderr **
multinode_test.go:227: incorrect number of running kubelets: args "out/minikube-windows-amd64.exe -p multinode-20220516223121-2444 status --alsologtostderr": multinode-20220516223121-2444
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

multinode_test.go:231: incorrect number of stopped hosts: args "out/minikube-windows-amd64.exe -p multinode-20220516223121-2444 status --alsologtostderr": multinode-20220516223121-2444
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

multinode_test.go:235: incorrect number of stopped kubelets: args "out/minikube-windows-amd64.exe -p multinode-20220516223121-2444 status --alsologtostderr": multinode-20220516223121-2444
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220516223121-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220516223121-2444: exit status 1 (1.0672511s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220516223121-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220516223121-2444 -n multinode-20220516223121-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220516223121-2444 -n multinode-20220516223121-2444: exit status 7 (2.863774s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:33:36.834701    7888 status.go:247] status error: host: state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220516223121-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopNode (10.15s)

TestMultiNode/serial/StartAfterStop (8.32s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:242: (dbg) Done: docker version -f {{.Server.Version}}: (1.1077238s)
multinode_test.go:252: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220516223121-2444 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220516223121-2444 node start m03 --alsologtostderr: exit status 85 (584.2892ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0516 22:33:38.211933    4240 out.go:296] Setting OutFile to fd 852 ...
	I0516 22:33:38.286304    4240 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:33:38.286304    4240 out.go:309] Setting ErrFile to fd 1008...
	I0516 22:33:38.286304    4240 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:33:38.296317    4240 mustload.go:65] Loading cluster: multinode-20220516223121-2444
	I0516 22:33:38.296317    4240 config.go:178] Loaded profile config "multinode-20220516223121-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:33:38.302304    4240 out.go:177] 
	W0516 22:33:38.305336    4240 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: Could not find node m03
	W0516 22:33:38.305336    4240 out.go:239] * 
	* 
	W0516 22:33:38.532436    4240 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_6eb326fa97d317035b4344941f9b9e6dd8ab3d92_17.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_6eb326fa97d317035b4344941f9b9e6dd8ab3d92_17.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0516 22:33:38.536553    4240 out.go:177] 

** /stderr **
multinode_test.go:254: I0516 22:33:38.211933    4240 out.go:296] Setting OutFile to fd 852 ...
I0516 22:33:38.286304    4240 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0516 22:33:38.286304    4240 out.go:309] Setting ErrFile to fd 1008...
I0516 22:33:38.286304    4240 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0516 22:33:38.296317    4240 mustload.go:65] Loading cluster: multinode-20220516223121-2444
I0516 22:33:38.296317    4240 config.go:178] Loaded profile config "multinode-20220516223121-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
I0516 22:33:38.302304    4240 out.go:177] 
W0516 22:33:38.305336    4240 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: Could not find node m03
W0516 22:33:38.305336    4240 out.go:239] * 
* 
W0516 22:33:38.532436    4240 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                       │
│    * If the above advice does not help, please let us know:                                                           │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
│                                                                                                                       │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
│    * Please also attach the following file to the GitHub issue:                                                       │
│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_6eb326fa97d317035b4344941f9b9e6dd8ab3d92_17.log    │
│                                                                                                                       │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                       │
│    * If the above advice does not help, please let us know:                                                           │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
│                                                                                                                       │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
│    * Please also attach the following file to the GitHub issue:                                                       │
│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_6eb326fa97d317035b4344941f9b9e6dd8ab3d92_17.log    │
│                                                                                                                       │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0516 22:33:38.536553    4240 out.go:177] 
multinode_test.go:255: node start returned an error. args "out/minikube-windows-amd64.exe -p multinode-20220516223121-2444 node start m03 --alsologtostderr": exit status 85
multinode_test.go:259: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220516223121-2444 status
multinode_test.go:259: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220516223121-2444 status: exit status 7 (2.7807923s)

-- stdout --
	multinode-20220516223121-2444
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E0516 22:33:41.321919    3288 status.go:258] status error: host: state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	E0516 22:33:41.322016    3288 status.go:261] The "multinode-20220516223121-2444" host does not exist!

** /stderr **
multinode_test.go:261: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-20220516223121-2444 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220516223121-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220516223121-2444: exit status 1 (1.073968s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220516223121-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220516223121-2444 -n multinode-20220516223121-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220516223121-2444 -n multinode-20220516223121-2444: exit status 7 (2.7463809s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:33:45.156251    8172 status.go:247] status error: host: state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220516223121-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StartAfterStop (8.32s)

TestMultiNode/serial/RestartKeepsNodes (140.16s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20220516223121-2444
multinode_test.go:288: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-20220516223121-2444
multinode_test.go:288: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p multinode-20220516223121-2444: exit status 82 (22.0397835s)

-- stdout --
	* Stopping node "multinode-20220516223121-2444"  ...
	* Stopping node "multinode-20220516223121-2444"  ...
	* Stopping node "multinode-20220516223121-2444"  ...
	* Stopping node "multinode-20220516223121-2444"  ...
	* Stopping node "multinode-20220516223121-2444"  ...
	* Stopping node "multinode-20220516223121-2444"  ...
	
	

-- /stdout --
** stderr ** 
	E0516 22:33:50.777437    7600 daemonize_windows.go:38] error terminating scheduled stop for profile multinode-20220516223121-2444: stopping schedule-stop service for profile multinode-20220516223121-2444: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect multinode-20220516223121-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_39.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:290: failed to run minikube stop. args "out/minikube-windows-amd64.exe node list -p multinode-20220516223121-2444" : exit status 82
multinode_test.go:293: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220516223121-2444 --wait=true -v=8 --alsologtostderr
multinode_test.go:293: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-20220516223121-2444 --wait=true -v=8 --alsologtostderr: exit status 60 (1m53.171455s)

-- stdout --
	* [multinode-20220516223121-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-20220516223121-2444 in cluster multinode-20220516223121-2444
	* Pulling base image ...
	* docker "multinode-20220516223121-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-20220516223121-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0516 22:34:07.811946    7968 out.go:296] Setting OutFile to fd 864 ...
	I0516 22:34:07.870311    7968 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:34:07.870381    7968 out.go:309] Setting ErrFile to fd 824...
	I0516 22:34:07.870425    7968 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:34:07.881406    7968 out.go:303] Setting JSON to false
	I0516 22:34:07.882911    7968 start.go:115] hostinfo: {"hostname":"minikube2","uptime":3560,"bootTime":1652736887,"procs":148,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0516 22:34:07.882911    7968 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0516 22:34:07.888132    7968 out.go:177] * [multinode-20220516223121-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0516 22:34:07.890436    7968 notify.go:193] Checking for updates...
	I0516 22:34:07.892933    7968 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0516 22:34:07.895272    7968 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0516 22:34:07.897482    7968 out.go:177]   - MINIKUBE_LOCATION=12739
	I0516 22:34:07.900137    7968 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0516 22:34:07.903132    7968 config.go:178] Loaded profile config "multinode-20220516223121-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:34:07.903224    7968 driver.go:358] Setting default libvirt URI to qemu:///system
	I0516 22:34:10.413773    7968 docker.go:137] docker version: linux-20.10.14
	I0516 22:34:10.420768    7968 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:34:12.406746    7968 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9859645s)
	I0516 22:34:12.406746    7968 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 22:34:11.4100904 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:34:12.412567    7968 out.go:177] * Using the docker driver based on existing profile
	I0516 22:34:12.414190    7968 start.go:284] selected driver: docker
	I0516 22:34:12.414190    7968 start.go:806] validating driver "docker" against &{Name:multinode-20220516223121-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:multinode-20220516223121-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 22:34:12.414860    7968 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0516 22:34:12.436640    7968 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:34:14.372636    7968 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9353869s)
	I0516 22:34:14.372880    7968 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 22:34:13.404053 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:34:14.432646    7968 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0516 22:34:14.432646    7968 cni.go:95] Creating CNI manager for ""
	I0516 22:34:14.432646    7968 cni.go:156] 1 nodes found, recommending kindnet
	I0516 22:34:14.432646    7968 start_flags.go:306] config:
	{Name:multinode-20220516223121-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:multinode-20220516223121-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 22:34:14.440119    7968 out.go:177] * Starting control plane node multinode-20220516223121-2444 in cluster multinode-20220516223121-2444
	I0516 22:34:14.442767    7968 cache.go:120] Beginning downloading kic base image for docker with docker
	I0516 22:34:14.446053    7968 out.go:177] * Pulling base image ...
	I0516 22:34:14.448308    7968 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0516 22:34:14.448387    7968 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0516 22:34:14.448537    7968 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0516 22:34:14.448583    7968 cache.go:57] Caching tarball of preloaded images
	I0516 22:34:14.449156    7968 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0516 22:34:14.449422    7968 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0516 22:34:14.449604    7968 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\multinode-20220516223121-2444\config.json ...
	I0516 22:34:15.483947    7968 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0516 22:34:15.484121    7968 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:34:15.484121    7968 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:34:15.484121    7968 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
	I0516 22:34:15.484121    7968 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory, skipping pull
	I0516 22:34:15.484121    7968 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in cache, skipping pull
	I0516 22:34:15.484725    7968 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
	I0516 22:34:15.484850    7968 cache.go:160] Loading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from local cache
	I0516 22:34:15.484914    7968 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:34:17.675944    7968 image.go:219] response: {"errorDetail":{"message":"mkdir /var/lib/docker/tmp/docker-import-284519100: read-only file system"},"error":"mkdir /var/lib/docker/tmp/docker-import-284519100: read-only file system"}
	I0516 22:34:17.676046    7968 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from cached tarball
	I0516 22:34:17.676046    7968 cache.go:206] Successfully downloaded all kic artifacts
	I0516 22:34:17.676214    7968 start.go:352] acquiring machines lock for multinode-20220516223121-2444: {Name:mk85c04f827b76c021a94c8d716dce0669525244 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:34:17.676361    7968 start.go:356] acquired machines lock for "multinode-20220516223121-2444" in 105.1µs
	I0516 22:34:17.676627    7968 start.go:94] Skipping create...Using existing machine configuration
	I0516 22:34:17.676668    7968 fix.go:55] fixHost starting: 
	I0516 22:34:17.702576    7968 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:34:18.715840    7968 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:34:18.716003    7968 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0130089s)
	I0516 22:34:18.716087    7968 fix.go:103] recreateIfNeeded on multinode-20220516223121-2444: state= err=unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:34:18.716087    7968 fix.go:108] machineExists: false. err=machine does not exist
	I0516 22:34:18.719644    7968 out.go:177] * docker "multinode-20220516223121-2444" container is missing, will recreate.
	I0516 22:34:18.724162    7968 delete.go:124] DEMOLISHING multinode-20220516223121-2444 ...
	I0516 22:34:18.737873    7968 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:34:19.767337    7968 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:34:19.767420    7968 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0290511s)
	W0516 22:34:19.767420    7968 stop.go:75] unable to get state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:34:19.767420    7968 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:34:19.783323    7968 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:34:20.786824    7968 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:34:20.786824    7968 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0034939s)
	I0516 22:34:20.786824    7968 delete.go:82] Unable to get host status for multinode-20220516223121-2444, assuming it has already been deleted: state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:34:20.796102    7968 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220516223121-2444
	W0516 22:34:21.817916    7968 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220516223121-2444 returned with exit code 1
	I0516 22:34:21.817916    7968 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} multinode-20220516223121-2444: (1.0218069s)
	I0516 22:34:21.817916    7968 kic.go:356] could not find the container multinode-20220516223121-2444 to remove it. will try anyways
	I0516 22:34:21.827186    7968 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:34:22.869000    7968 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:34:22.869170    7968 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0416216s)
	W0516 22:34:22.869279    7968 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:34:22.879600    7968 cli_runner.go:164] Run: docker exec --privileged -t multinode-20220516223121-2444 /bin/bash -c "sudo init 0"
	W0516 22:34:23.936881    7968 cli_runner.go:211] docker exec --privileged -t multinode-20220516223121-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 22:34:23.936881    7968 cli_runner.go:217] Completed: docker exec --privileged -t multinode-20220516223121-2444 /bin/bash -c "sudo init 0": (1.0572733s)
	I0516 22:34:23.936881    7968 oci.go:641] error shutdown multinode-20220516223121-2444: docker exec --privileged -t multinode-20220516223121-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:34:24.954467    7968 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:34:25.959657    7968 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:34:25.959826    7968 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0051679s)
	I0516 22:34:25.959904    7968 oci.go:653] temporary error verifying shutdown: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:34:25.959938    7968 oci.go:655] temporary error: container multinode-20220516223121-2444 status is  but expect it to be exited
	I0516 22:34:25.959992    7968 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:34:26.523970    7968 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:34:27.533526    7968 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:34:27.533685    7968 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0095486s)
	I0516 22:34:27.533787    7968 oci.go:653] temporary error verifying shutdown: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:34:27.533831    7968 oci.go:655] temporary error: container multinode-20220516223121-2444 status is  but expect it to be exited
	I0516 22:34:27.533864    7968 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:34:28.628364    7968 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:34:29.670318    7968 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:34:29.670412    7968 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0419467s)
	I0516 22:34:29.670475    7968 oci.go:653] temporary error verifying shutdown: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:34:29.670475    7968 oci.go:655] temporary error: container multinode-20220516223121-2444 status is  but expect it to be exited
	I0516 22:34:29.670564    7968 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:34:31.004120    7968 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:34:32.008499    7968 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:34:32.008499    7968 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0043717s)
	I0516 22:34:32.008499    7968 oci.go:653] temporary error verifying shutdown: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:34:32.008499    7968 oci.go:655] temporary error: container multinode-20220516223121-2444 status is  but expect it to be exited
	I0516 22:34:32.008499    7968 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:34:33.613787    7968 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:34:34.643008    7968 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:34:34.643008    7968 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0292136s)
	I0516 22:34:34.643008    7968 oci.go:653] temporary error verifying shutdown: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:34:34.643008    7968 oci.go:655] temporary error: container multinode-20220516223121-2444 status is  but expect it to be exited
	I0516 22:34:34.643008    7968 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:34:37.000935    7968 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:34:38.030024    7968 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:34:38.030024    7968 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.029082s)
	I0516 22:34:38.030024    7968 oci.go:653] temporary error verifying shutdown: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:34:38.030024    7968 oci.go:655] temporary error: container multinode-20220516223121-2444 status is  but expect it to be exited
	I0516 22:34:38.030024    7968 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:34:42.554863    7968 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:34:43.560145    7968 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:34:43.560172    7968 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0052468s)
	I0516 22:34:43.560172    7968 oci.go:653] temporary error verifying shutdown: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:34:43.560172    7968 oci.go:655] temporary error: container multinode-20220516223121-2444 status is  but expect it to be exited
	I0516 22:34:43.560172    7968 oci.go:88] couldn't shut down multinode-20220516223121-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	 
	I0516 22:34:43.569560    7968 cli_runner.go:164] Run: docker rm -f -v multinode-20220516223121-2444
	I0516 22:34:44.573729    7968 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220516223121-2444
	W0516 22:34:45.597441    7968 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220516223121-2444 returned with exit code 1
	I0516 22:34:45.597619    7968 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} multinode-20220516223121-2444: (1.0237039s)
	I0516 22:34:45.607892    7968 cli_runner.go:164] Run: docker network inspect multinode-20220516223121-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:34:46.627963    7968 cli_runner.go:211] docker network inspect multinode-20220516223121-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:34:46.627963    7968 cli_runner.go:217] Completed: docker network inspect multinode-20220516223121-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0200643s)
	I0516 22:34:46.638418    7968 network_create.go:272] running [docker network inspect multinode-20220516223121-2444] to gather additional debugging logs...
	I0516 22:34:46.638639    7968 cli_runner.go:164] Run: docker network inspect multinode-20220516223121-2444
	W0516 22:34:47.643864    7968 cli_runner.go:211] docker network inspect multinode-20220516223121-2444 returned with exit code 1
	I0516 22:34:47.643864    7968 cli_runner.go:217] Completed: docker network inspect multinode-20220516223121-2444: (1.0050125s)
	I0516 22:34:47.643971    7968 network_create.go:275] error running [docker network inspect multinode-20220516223121-2444]: docker network inspect multinode-20220516223121-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220516223121-2444
	I0516 22:34:47.643971    7968 network_create.go:277] output of [docker network inspect multinode-20220516223121-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220516223121-2444
	
	** /stderr **
	W0516 22:34:47.645087    7968 delete.go:139] delete failed (probably ok) <nil>
	I0516 22:34:47.645087    7968 fix.go:115] Sleeping 1 second for extra luck!
	I0516 22:34:48.651478    7968 start.go:131] createHost starting for "" (driver="docker")
	I0516 22:34:48.655674    7968 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0516 22:34:48.656560    7968 start.go:165] libmachine.API.Create for "multinode-20220516223121-2444" (driver="docker")
	I0516 22:34:48.656648    7968 client.go:168] LocalClient.Create starting
	I0516 22:34:48.657501    7968 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 22:34:48.657696    7968 main.go:134] libmachine: Decoding PEM data...
	I0516 22:34:48.657696    7968 main.go:134] libmachine: Parsing certificate...
	I0516 22:34:48.657696    7968 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 22:34:48.657696    7968 main.go:134] libmachine: Decoding PEM data...
	I0516 22:34:48.658300    7968 main.go:134] libmachine: Parsing certificate...
	I0516 22:34:48.670521    7968 cli_runner.go:164] Run: docker network inspect multinode-20220516223121-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:34:49.684127    7968 cli_runner.go:211] docker network inspect multinode-20220516223121-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:34:49.684127    7968 cli_runner.go:217] Completed: docker network inspect multinode-20220516223121-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.012594s)
	I0516 22:34:49.694347    7968 network_create.go:272] running [docker network inspect multinode-20220516223121-2444] to gather additional debugging logs...
	I0516 22:34:49.694347    7968 cli_runner.go:164] Run: docker network inspect multinode-20220516223121-2444
	W0516 22:34:50.702644    7968 cli_runner.go:211] docker network inspect multinode-20220516223121-2444 returned with exit code 1
	I0516 22:34:50.702644    7968 cli_runner.go:217] Completed: docker network inspect multinode-20220516223121-2444: (1.0081097s)
	I0516 22:34:50.702644    7968 network_create.go:275] error running [docker network inspect multinode-20220516223121-2444]: docker network inspect multinode-20220516223121-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220516223121-2444
	I0516 22:34:50.702644    7968 network_create.go:277] output of [docker network inspect multinode-20220516223121-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220516223121-2444
	
	** /stderr **
	I0516 22:34:50.710863    7968 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 22:34:51.749832    7968 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0380338s)
	I0516 22:34:51.766599    7968 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000006f58] misses:0}
	I0516 22:34:51.767065    7968 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:34:51.767181    7968 network_create.go:115] attempt to create docker network multinode-20220516223121-2444 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0516 22:34:51.774732    7968 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444
	W0516 22:34:52.810981    7968 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444 returned with exit code 1
	I0516 22:34:52.811015    7968 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444: (1.0361666s)
	W0516 22:34:52.811112    7968 network_create.go:107] failed to create docker network multinode-20220516223121-2444 192.168.49.0/24, will retry: subnet is taken
	I0516 22:34:52.827028    7968 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006f58] amended:false}} dirty:map[] misses:0}
	I0516 22:34:52.827028    7968 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:34:52.842249    7968 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006f58] amended:true}} dirty:map[192.168.49.0:0xc000006f58 192.168.58.0:0xc000400748] misses:0}
	I0516 22:34:52.842249    7968 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:34:52.842845    7968 network_create.go:115] attempt to create docker network multinode-20220516223121-2444 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0516 22:34:52.850627    7968 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444
	W0516 22:34:53.894043    7968 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444 returned with exit code 1
	I0516 22:34:53.894092    7968 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444: (1.0430811s)
	W0516 22:34:53.894136    7968 network_create.go:107] failed to create docker network multinode-20220516223121-2444 192.168.58.0/24, will retry: subnet is taken
	I0516 22:34:53.909602    7968 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006f58] amended:true}} dirty:map[192.168.49.0:0xc000006f58 192.168.58.0:0xc000400748] misses:1}
	I0516 22:34:53.909602    7968 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:34:53.926070    7968 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006f58] amended:true}} dirty:map[192.168.49.0:0xc000006f58 192.168.58.0:0xc000400748 192.168.67.0:0xc0006149f0] misses:1}
	I0516 22:34:53.926070    7968 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:34:53.926070    7968 network_create.go:115] attempt to create docker network multinode-20220516223121-2444 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0516 22:34:53.934369    7968 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444
	W0516 22:34:54.957270    7968 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444 returned with exit code 1
	I0516 22:34:54.957323    7968 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444: (1.0227349s)
	W0516 22:34:54.957354    7968 network_create.go:107] failed to create docker network multinode-20220516223121-2444 192.168.67.0/24, will retry: subnet is taken
	I0516 22:34:54.974174    7968 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006f58] amended:true}} dirty:map[192.168.49.0:0xc000006f58 192.168.58.0:0xc000400748 192.168.67.0:0xc0006149f0] misses:2}
	I0516 22:34:54.974174    7968 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:34:54.987907    7968 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006f58] amended:true}} dirty:map[192.168.49.0:0xc000006f58 192.168.58.0:0xc000400748 192.168.67.0:0xc0006149f0 192.168.76.0:0xc0005ce2a8] misses:2}
	I0516 22:34:54.987907    7968 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:34:54.987907    7968 network_create.go:115] attempt to create docker network multinode-20220516223121-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0516 22:34:54.998610    7968 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444
	W0516 22:34:56.024215    7968 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444 returned with exit code 1
	I0516 22:34:56.024215    7968 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444: (1.025598s)
	E0516 22:34:56.024215    7968 network_create.go:104] error while trying to create docker network multinode-20220516223121-2444 192.168.76.0/24: create docker network multinode-20220516223121-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network abe6386c416e1b1165ef8da57eb7e9d5f7d7a80662ace50771da604cadc27d29 (br-abe6386c416e): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	W0516 22:34:56.024215    7968 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220516223121-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network abe6386c416e1b1165ef8da57eb7e9d5f7d7a80662ace50771da604cadc27d29 (br-abe6386c416e): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220516223121-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network abe6386c416e1b1165ef8da57eb7e9d5f7d7a80662ace50771da604cadc27d29 (br-abe6386c416e): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
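Every candidate CIDR (192.168.49.0/24, .58.0/24, .67.0/24, .76.0/24) is rejected by the daemon because an existing bridge network already covers the range. The overlap check being hit can be reproduced with Python's `ipaddress` module — an illustrative sketch only, not minikube's actual code; the pre-existing network's CIDR is assumed here, since the log reports only its ID (`br-301630a99a7e`):

```python
import ipaddress

# Subnets minikube tried in sequence, taken from the log above.
candidates = ["192.168.49.0/24", "192.168.58.0/24",
              "192.168.67.0/24", "192.168.76.0/24"]

# Hypothetical CIDR for the pre-existing bridge; the log only
# reports the conflict, not the conflicting network's range.
existing = ipaddress.ip_network("192.168.0.0/16")

for cidr in candidates:
    net = ipaddress.ip_network(cidr)
    if net.overlaps(existing):
        print(f"{cidr} conflicts with {existing}")
```

If the host's bridges really do blanket 192.168.0.0/16, every /24 minikube proposes will overlap, which matches the four consecutive "subnet is taken" retries before the un-retryable failure.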
	I0516 22:34:56.038207    7968 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 22:34:57.088304    7968 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0500895s)
	I0516 22:34:57.096640    7968 cli_runner.go:164] Run: docker volume create multinode-20220516223121-2444 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 22:34:58.165397    7968 cli_runner.go:211] docker volume create multinode-20220516223121-2444 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 22:34:58.165397    7968 cli_runner.go:217] Completed: docker volume create multinode-20220516223121-2444 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444 --label created_by.minikube.sigs.k8s.io=true: (1.068513s)
	I0516 22:34:58.165587    7968 client.go:171] LocalClient.Create took 9.5088708s
	I0516 22:35:00.177592    7968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:35:00.186018    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:35:01.237043    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:35:01.237167    7968 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.050901s)
	I0516 22:35:01.237313    7968 retry.go:31] will retry after 164.129813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:35:01.421598    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:35:02.453830    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:35:02.453898    7968 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0319141s)
	W0516 22:35:02.454155    7968 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	
	W0516 22:35:02.454251    7968 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:35:02.465146    7968 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:35:02.472470    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:35:03.501941    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:35:03.501941    7968 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0292867s)
	I0516 22:35:03.502231    7968 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:35:03.726342    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:35:04.763864    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:35:04.763864    7968 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0373129s)
	W0516 22:35:04.763864    7968 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	
	W0516 22:35:04.763864    7968 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:35:04.763864    7968 start.go:134] duration metric: createHost completed in 16.1122704s
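The repeated `retry.go:31] will retry after ...` lines above come from a bounded retry loop wrapped around the port-22 lookup. A minimal Python sketch of that pattern (delays, attempt count, and error text are illustrative, not minikube's actual values):

```python
import time

def retry(fn, attempts=3, delay=0.2):
    """Call fn until it succeeds or attempts run out,
    sleeping `delay` seconds between tries (cf. retry.go above)."""
    last_err = None
    for _ in range(attempts):
        try:
            return fn()
        except RuntimeError as err:
            last_err = err
            print(f"will retry after {delay}s: {err}")
            time.sleep(delay)
    raise RuntimeError(f"gave up after {attempts} attempts") from last_err

calls = {"n": 0}

def get_ssh_port():
    # Simulate "No such container" failing twice, then succeeding.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("No such container")
    return 22

port = retry(get_ssh_port)
```

In the failing run above the container never comes into existence, so every retry exhausts its attempts and the caller surfaces the final `exit status 1`.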
	I0516 22:35:04.774527    7968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:35:04.782509    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:35:05.826465    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:35:05.826605    7968 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0439089s)
	I0516 22:35:05.826819    7968 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:35:06.164957    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:35:07.187069    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:35:07.187069    7968 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0221046s)
	W0516 22:35:07.187069    7968 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	
	W0516 22:35:07.187069    7968 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:35:07.197056    7968 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:35:07.204058    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:35:08.203866    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:35:08.203866    7968 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:35:08.438201    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:35:09.461985    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:35:09.462060    7968 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0237117s)
	W0516 22:35:09.462129    7968 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	
	W0516 22:35:09.462129    7968 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:35:09.462129    7968 fix.go:57] fixHost completed within 51.785089s
	I0516 22:35:09.462129    7968 start.go:81] releasing machines lock for "multinode-20220516223121-2444", held for 51.7853205s
	W0516 22:35:09.462129    7968 start.go:608] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220516223121-2444 container: docker volume create multinode-20220516223121-2444 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220516223121-2444: error while creating volume root path '/var/lib/docker/volumes/multinode-20220516223121-2444': mkdir /var/lib/docker/volumes/multinode-20220516223121-2444: read-only file system
	W0516 22:35:09.462820    7968 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220516223121-2444 container: docker volume create multinode-20220516223121-2444 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220516223121-2444: error while creating volume root path '/var/lib/docker/volumes/multinode-20220516223121-2444': mkdir /var/lib/docker/volumes/multinode-20220516223121-2444: read-only file system
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220516223121-2444 container: docker volume create multinode-20220516223121-2444 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220516223121-2444: error while creating volume root path '/var/lib/docker/volumes/multinode-20220516223121-2444': mkdir /var/lib/docker/volumes/multinode-20220516223121-2444: read-only file system
	
	I0516 22:35:09.462848    7968 start.go:623] Will try again in 5 seconds ...
	I0516 22:35:14.474294    7968 start.go:352] acquiring machines lock for multinode-20220516223121-2444: {Name:mk85c04f827b76c021a94c8d716dce0669525244 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:35:14.474294    7968 start.go:356] acquired machines lock for "multinode-20220516223121-2444" in 0s
	I0516 22:35:14.474294    7968 start.go:94] Skipping create...Using existing machine configuration
	I0516 22:35:14.474294    7968 fix.go:55] fixHost starting: 
	I0516 22:35:14.497605    7968 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:35:15.554980    7968 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:35:15.554980    7968 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0571762s)
	I0516 22:35:15.555335    7968 fix.go:103] recreateIfNeeded on multinode-20220516223121-2444: state= err=unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:35:15.555401    7968 fix.go:108] machineExists: false. err=machine does not exist
	I0516 22:35:15.560480    7968 out.go:177] * docker "multinode-20220516223121-2444" container is missing, will recreate.
	I0516 22:35:15.562801    7968 delete.go:124] DEMOLISHING multinode-20220516223121-2444 ...
	I0516 22:35:15.579831    7968 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:35:16.610354    7968 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:35:16.610536    7968 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0303858s)
	W0516 22:35:16.610625    7968 stop.go:75] unable to get state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:35:16.610625    7968 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:35:16.630435    7968 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:35:17.681233    7968 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:35:17.681573    7968 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0507899s)
	I0516 22:35:17.681601    7968 delete.go:82] Unable to get host status for multinode-20220516223121-2444, assuming it has already been deleted: state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:35:17.690293    7968 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220516223121-2444
	W0516 22:35:18.697316    7968 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220516223121-2444 returned with exit code 1
	I0516 22:35:18.697316    7968 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} multinode-20220516223121-2444: (1.0070159s)
	I0516 22:35:18.697316    7968 kic.go:356] could not find the container multinode-20220516223121-2444 to remove it. will try anyways
	I0516 22:35:18.706313    7968 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:35:19.729498    7968 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:35:19.729641    7968 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.022964s)
	W0516 22:35:19.729641    7968 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:35:19.738522    7968 cli_runner.go:164] Run: docker exec --privileged -t multinode-20220516223121-2444 /bin/bash -c "sudo init 0"
	W0516 22:35:20.753368    7968 cli_runner.go:211] docker exec --privileged -t multinode-20220516223121-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 22:35:20.753368    7968 cli_runner.go:217] Completed: docker exec --privileged -t multinode-20220516223121-2444 /bin/bash -c "sudo init 0": (1.0148387s)
	I0516 22:35:20.753368    7968 oci.go:641] error shutdown multinode-20220516223121-2444: docker exec --privileged -t multinode-20220516223121-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:35:21.777064    7968 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:35:22.818738    7968 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:35:22.818834    7968 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0416286s)
	I0516 22:35:22.818834    7968 oci.go:653] temporary error verifying shutdown: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:35:22.818834    7968 oci.go:655] temporary error: container multinode-20220516223121-2444 status is  but expect it to be exited
	I0516 22:35:22.818834    7968 retry.go:31] will retry after 484.444922ms: couldn't verify container is exited. %v: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:35:23.332747    7968 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:35:24.386110    7968 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:35:24.386241    7968 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0531534s)
	I0516 22:35:24.386509    7968 oci.go:653] temporary error verifying shutdown: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:35:24.386572    7968 oci.go:655] temporary error: container multinode-20220516223121-2444 status is  but expect it to be exited
	I0516 22:35:24.386616    7968 retry.go:31] will retry after 587.275613ms: couldn't verify container is exited. %v: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:35:24.989448    7968 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:35:26.026699    7968 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:35:26.026699    7968 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0372432s)
	I0516 22:35:26.026699    7968 oci.go:653] temporary error verifying shutdown: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:35:26.026699    7968 oci.go:655] temporary error: container multinode-20220516223121-2444 status is  but expect it to be exited
	I0516 22:35:26.026699    7968 retry.go:31] will retry after 892.239589ms: couldn't verify container is exited. %v: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:35:26.932568    7968 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:35:27.972971    7968 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:35:27.972971    7968 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0401901s)
	I0516 22:35:27.972971    7968 oci.go:653] temporary error verifying shutdown: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:35:27.972971    7968 oci.go:655] temporary error: container multinode-20220516223121-2444 status is  but expect it to be exited
	I0516 22:35:27.972971    7968 retry.go:31] will retry after 1.989705391s: couldn't verify container is exited. %v: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:35:29.974540    7968 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:35:30.999849    7968 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:35:30.999849    7968 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0252374s)
	I0516 22:35:30.999849    7968 oci.go:653] temporary error verifying shutdown: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:35:30.999849    7968 oci.go:655] temporary error: container multinode-20220516223121-2444 status is  but expect it to be exited
	I0516 22:35:30.999849    7968 retry.go:31] will retry after 1.818837414s: couldn't verify container is exited. %v: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:35:32.833662    7968 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:35:33.834467    7968 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:35:33.834499    7968 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0007436s)
	I0516 22:35:33.834680    7968 oci.go:653] temporary error verifying shutdown: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:35:33.834718    7968 oci.go:655] temporary error: container multinode-20220516223121-2444 status is  but expect it to be exited
	I0516 22:35:33.834718    7968 retry.go:31] will retry after 2.669912672s: couldn't verify container is exited. %v: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:35:36.524590    7968 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:35:37.612952    7968 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:35:37.613220    7968 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.088354s)
	I0516 22:35:37.613271    7968 oci.go:653] temporary error verifying shutdown: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:35:37.613338    7968 oci.go:655] temporary error: container multinode-20220516223121-2444 status is  but expect it to be exited
	I0516 22:35:37.613391    7968 oci.go:88] couldn't shut down multinode-20220516223121-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	 
	I0516 22:35:37.622091    7968 cli_runner.go:164] Run: docker rm -f -v multinode-20220516223121-2444
	I0516 22:35:38.659483    7968 cli_runner.go:217] Completed: docker rm -f -v multinode-20220516223121-2444: (1.0371505s)
	I0516 22:35:38.668427    7968 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220516223121-2444
	W0516 22:35:39.707953    7968 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220516223121-2444 returned with exit code 1
	I0516 22:35:39.708074    7968 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} multinode-20220516223121-2444: (1.0393027s)
	I0516 22:35:39.716942    7968 cli_runner.go:164] Run: docker network inspect multinode-20220516223121-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:35:40.757520    7968 cli_runner.go:211] docker network inspect multinode-20220516223121-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:35:40.757626    7968 cli_runner.go:217] Completed: docker network inspect multinode-20220516223121-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0401896s)
	I0516 22:35:40.770050    7968 network_create.go:272] running [docker network inspect multinode-20220516223121-2444] to gather additional debugging logs...
	I0516 22:35:40.770050    7968 cli_runner.go:164] Run: docker network inspect multinode-20220516223121-2444
	W0516 22:35:41.825920    7968 cli_runner.go:211] docker network inspect multinode-20220516223121-2444 returned with exit code 1
	I0516 22:35:41.826062    7968 cli_runner.go:217] Completed: docker network inspect multinode-20220516223121-2444: (1.0558142s)
	I0516 22:35:41.826094    7968 network_create.go:275] error running [docker network inspect multinode-20220516223121-2444]: docker network inspect multinode-20220516223121-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220516223121-2444
	I0516 22:35:41.826144    7968 network_create.go:277] output of [docker network inspect multinode-20220516223121-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220516223121-2444
	
	** /stderr **
	W0516 22:35:41.827264    7968 delete.go:139] delete failed (probably ok) <nil>
	I0516 22:35:41.827313    7968 fix.go:115] Sleeping 1 second for extra luck!
	I0516 22:35:42.831978    7968 start.go:131] createHost starting for "" (driver="docker")
	I0516 22:35:42.836505    7968 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0516 22:35:42.836505    7968 start.go:165] libmachine.API.Create for "multinode-20220516223121-2444" (driver="docker")
	I0516 22:35:42.836505    7968 client.go:168] LocalClient.Create starting
	I0516 22:35:42.837206    7968 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 22:35:42.837206    7968 main.go:134] libmachine: Decoding PEM data...
	I0516 22:35:42.837206    7968 main.go:134] libmachine: Parsing certificate...
	I0516 22:35:42.837964    7968 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 22:35:42.837964    7968 main.go:134] libmachine: Decoding PEM data...
	I0516 22:35:42.837964    7968 main.go:134] libmachine: Parsing certificate...
	I0516 22:35:42.847064    7968 cli_runner.go:164] Run: docker network inspect multinode-20220516223121-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:35:43.925724    7968 cli_runner.go:211] docker network inspect multinode-20220516223121-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:35:43.925724    7968 cli_runner.go:217] Completed: docker network inspect multinode-20220516223121-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0786525s)
	I0516 22:35:43.935917    7968 network_create.go:272] running [docker network inspect multinode-20220516223121-2444] to gather additional debugging logs...
	I0516 22:35:43.935917    7968 cli_runner.go:164] Run: docker network inspect multinode-20220516223121-2444
	W0516 22:35:44.958113    7968 cli_runner.go:211] docker network inspect multinode-20220516223121-2444 returned with exit code 1
	I0516 22:35:44.958113    7968 cli_runner.go:217] Completed: docker network inspect multinode-20220516223121-2444: (1.0220206s)
	I0516 22:35:44.958113    7968 network_create.go:275] error running [docker network inspect multinode-20220516223121-2444]: docker network inspect multinode-20220516223121-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220516223121-2444
	I0516 22:35:44.958113    7968 network_create.go:277] output of [docker network inspect multinode-20220516223121-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220516223121-2444
	
	** /stderr **
	I0516 22:35:44.967212    7968 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 22:35:45.973997    7968 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0065622s)
	I0516 22:35:45.990836    7968 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006f58] amended:true}} dirty:map[192.168.49.0:0xc000006f58 192.168.58.0:0xc000400748 192.168.67.0:0xc0006149f0 192.168.76.0:0xc0005ce2a8] misses:2}
	I0516 22:35:45.990836    7968 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:35:46.004820    7968 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006f58] amended:true}} dirty:map[192.168.49.0:0xc000006f58 192.168.58.0:0xc000400748 192.168.67.0:0xc0006149f0 192.168.76.0:0xc0005ce2a8] misses:3}
	I0516 22:35:46.004820    7968 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:35:46.020967    7968 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006f58 192.168.58.0:0xc000400748 192.168.67.0:0xc0006149f0 192.168.76.0:0xc0005ce2a8] amended:false}} dirty:map[] misses:0}
	I0516 22:35:46.020967    7968 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:35:46.035900    7968 network.go:279] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006f58 192.168.58.0:0xc000400748 192.168.67.0:0xc0006149f0 192.168.76.0:0xc0005ce2a8] amended:false}} dirty:map[] misses:0}
	I0516 22:35:46.035900    7968 network.go:238] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:35:46.051009    7968 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006f58 192.168.58.0:0xc000400748 192.168.67.0:0xc0006149f0 192.168.76.0:0xc0005ce2a8] amended:true}} dirty:map[192.168.49.0:0xc000006f58 192.168.58.0:0xc000400748 192.168.67.0:0xc0006149f0 192.168.76.0:0xc0005ce2a8 192.168.85.0:0xc0006147e0] misses:0}
	I0516 22:35:46.052007    7968 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:35:46.052007    7968 network_create.go:115] attempt to create docker network multinode-20220516223121-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0516 22:35:46.060748    7968 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444
	W0516 22:35:47.095278    7968 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444 returned with exit code 1
	I0516 22:35:47.095278    7968 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444: (1.0344902s)
	E0516 22:35:47.095278    7968 network_create.go:104] error while trying to create docker network multinode-20220516223121-2444 192.168.85.0/24: create docker network multinode-20220516223121-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 282c0aeb47aed56b239e09c00846eaefa50420d7277b72f9f196a1feb253e676 (br-282c0aeb47ae): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	W0516 22:35:47.095278    7968 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220516223121-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 282c0aeb47aed56b239e09c00846eaefa50420d7277b72f9f196a1feb253e676 (br-282c0aeb47ae): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220516223121-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 282c0aeb47aed56b239e09c00846eaefa50420d7277b72f9f196a1feb253e676 (br-282c0aeb47ae): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	I0516 22:35:47.111000    7968 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 22:35:48.153396    7968 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.042389s)
	I0516 22:35:48.162649    7968 cli_runner.go:164] Run: docker volume create multinode-20220516223121-2444 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 22:35:49.202355    7968 cli_runner.go:211] docker volume create multinode-20220516223121-2444 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 22:35:49.202526    7968 cli_runner.go:217] Completed: docker volume create multinode-20220516223121-2444 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0396985s)
	I0516 22:35:49.202526    7968 client.go:171] LocalClient.Create took 6.3659746s
	I0516 22:35:51.229673    7968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:35:51.238215    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:35:52.254395    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:35:52.254428    7968 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0157928s)
	I0516 22:35:52.254556    7968 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:35:52.538615    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:35:53.552130    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:35:53.552166    7968 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0133773s)
	W0516 22:35:53.552511    7968 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	
	W0516 22:35:53.552600    7968 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:35:53.563988    7968 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:35:53.570210    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:35:54.596142    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:35:54.596142    7968 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0259249s)
	I0516 22:35:54.596142    7968 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:35:54.806425    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:35:55.820890    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:35:55.820890    7968 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0144573s)
	W0516 22:35:55.820890    7968 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	
	W0516 22:35:55.820890    7968 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:35:55.820890    7968 start.go:134] duration metric: createHost completed in 12.9885746s
	I0516 22:35:55.832702    7968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:35:55.839814    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:35:56.875817    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:35:56.875817    7968 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0359956s)
	I0516 22:35:56.875817    7968 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:35:57.215685    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:35:58.251514    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:35:58.251514    7968 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0356218s)
	W0516 22:35:58.251514    7968 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	
	W0516 22:35:58.251514    7968 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:35:58.263564    7968 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:35:58.270566    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:35:59.319409    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:35:59.319409    7968 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0478375s)
	I0516 22:35:59.319409    7968 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:35:59.671465    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:36:00.709578    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:36:00.709835    7968 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0381053s)
	W0516 22:36:00.709903    7968 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	
	W0516 22:36:00.709903    7968 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:36:00.709903    7968 fix.go:57] fixHost completed within 46.2352752s
	I0516 22:36:00.709903    7968 start.go:81] releasing machines lock for "multinode-20220516223121-2444", held for 46.2352752s
	W0516 22:36:00.710582    7968 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-20220516223121-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220516223121-2444 container: docker volume create multinode-20220516223121-2444 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220516223121-2444: error while creating volume root path '/var/lib/docker/volumes/multinode-20220516223121-2444': mkdir /var/lib/docker/volumes/multinode-20220516223121-2444: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p multinode-20220516223121-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220516223121-2444 container: docker volume create multinode-20220516223121-2444 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220516223121-2444: error while creating volume root path '/var/lib/docker/volumes/multinode-20220516223121-2444': mkdir /var/lib/docker/volumes/multinode-20220516223121-2444: read-only file system
	
	I0516 22:36:00.715943    7968 out.go:177] 
	W0516 22:36:00.717935    7968 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220516223121-2444 container: docker volume create multinode-20220516223121-2444 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220516223121-2444: error while creating volume root path '/var/lib/docker/volumes/multinode-20220516223121-2444': mkdir /var/lib/docker/volumes/multinode-20220516223121-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220516223121-2444 container: docker volume create multinode-20220516223121-2444 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220516223121-2444: error while creating volume root path '/var/lib/docker/volumes/multinode-20220516223121-2444': mkdir /var/lib/docker/volumes/multinode-20220516223121-2444: read-only file system
	
	W0516 22:36:00.717935    7968 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0516 22:36:00.717935    7968 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0516 22:36:00.721934    7968 out.go:177] 

** /stderr **
multinode_test.go:295: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-20220516223121-2444" : exit status 60
multinode_test.go:298: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20220516223121-2444
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220516223121-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220516223121-2444: exit status 1 (1.087129s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220516223121-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220516223121-2444 -n multinode-20220516223121-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220516223121-2444 -n multinode-20220516223121-2444: exit status 7 (2.9480049s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:36:05.318584    7788 status.go:247] status error: host: state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220516223121-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (140.16s)

TestMultiNode/serial/DeleteNode (9.96s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220516223121-2444 node delete m03
multinode_test.go:392: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220516223121-2444 node delete m03: exit status 80 (3.1533073s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_207105384607abbf0a822abec5db82084f27bc08_4.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:394: node stop returned an error. args "out/minikube-windows-amd64.exe -p multinode-20220516223121-2444 node delete m03": exit status 80
multinode_test.go:398: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220516223121-2444 status --alsologtostderr
multinode_test.go:398: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220516223121-2444 status --alsologtostderr: exit status 7 (2.7755947s)

-- stdout --
	multinode-20220516223121-2444
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0516 22:36:08.741216    6992 out.go:296] Setting OutFile to fd 752 ...
	I0516 22:36:08.801583    6992 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:36:08.801583    6992 out.go:309] Setting ErrFile to fd 836...
	I0516 22:36:08.801583    6992 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:36:08.810889    6992 out.go:303] Setting JSON to false
	I0516 22:36:08.810889    6992 mustload.go:65] Loading cluster: multinode-20220516223121-2444
	I0516 22:36:08.811722    6992 config.go:178] Loaded profile config "multinode-20220516223121-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:36:08.811722    6992 status.go:253] checking status of multinode-20220516223121-2444 ...
	I0516 22:36:08.829596    6992 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:36:11.246929    6992 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:36:11.246956    6992 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (2.4171026s)
	I0516 22:36:11.246956    6992 status.go:328] multinode-20220516223121-2444 host status = "" (err=state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	)
	I0516 22:36:11.246956    6992 status.go:255] multinode-20220516223121-2444 status: &{Name:multinode-20220516223121-2444 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0516 22:36:11.246956    6992 status.go:258] status error: host: state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	E0516 22:36:11.246956    6992 status.go:261] The "multinode-20220516223121-2444" host does not exist!

** /stderr **
multinode_test.go:400: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-20220516223121-2444 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220516223121-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220516223121-2444: exit status 1 (1.1412905s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220516223121-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220516223121-2444 -n multinode-20220516223121-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220516223121-2444 -n multinode-20220516223121-2444: exit status 7 (2.8749357s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:36:15.278490    6236 status.go:247] status error: host: state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220516223121-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeleteNode (9.96s)

TestMultiNode/serial/StopMultiNode (31.55s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220516223121-2444 stop
multinode_test.go:312: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220516223121-2444 stop: exit status 82 (22.1252259s)

-- stdout --
	* Stopping node "multinode-20220516223121-2444"  ...
	* Stopping node "multinode-20220516223121-2444"  ...
	* Stopping node "multinode-20220516223121-2444"  ...
	* Stopping node "multinode-20220516223121-2444"  ...
	* Stopping node "multinode-20220516223121-2444"  ...
	* Stopping node "multinode-20220516223121-2444"  ...
	
	

-- /stdout --
** stderr ** 
	E0516 22:36:20.548319    3724 daemonize_windows.go:38] error terminating scheduled stop for profile multinode-20220516223121-2444: stopping schedule-stop service for profile multinode-20220516223121-2444: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect multinode-20220516223121-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_39.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:314: node stop returned an error. args "out/minikube-windows-amd64.exe -p multinode-20220516223121-2444 stop": exit status 82
multinode_test.go:318: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220516223121-2444 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220516223121-2444 status: exit status 7 (2.7546414s)

-- stdout --
	multinode-20220516223121-2444
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E0516 22:36:40.158130    7180 status.go:258] status error: host: state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	E0516 22:36:40.158130    7180 status.go:261] The "multinode-20220516223121-2444" host does not exist!

** /stderr **
multinode_test.go:325: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220516223121-2444 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220516223121-2444 status --alsologtostderr: exit status 7 (2.7443072s)

-- stdout --
	multinode-20220516223121-2444
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0516 22:36:40.420786    7524 out.go:296] Setting OutFile to fd 664 ...
	I0516 22:36:40.480526    7524 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:36:40.480526    7524 out.go:309] Setting ErrFile to fd 868...
	I0516 22:36:40.480526    7524 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:36:40.489850    7524 out.go:303] Setting JSON to false
	I0516 22:36:40.489850    7524 mustload.go:65] Loading cluster: multinode-20220516223121-2444
	I0516 22:36:40.490662    7524 config.go:178] Loaded profile config "multinode-20220516223121-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:36:40.490662    7524 status.go:253] checking status of multinode-20220516223121-2444 ...
	I0516 22:36:40.511157    7524 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:36:42.902509    7524 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:36:42.902596    7524 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (2.3911171s)
	I0516 22:36:42.902696    7524 status.go:328] multinode-20220516223121-2444 host status = "" (err=state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	)
	I0516 22:36:42.902748    7524 status.go:255] multinode-20220516223121-2444 status: &{Name:multinode-20220516223121-2444 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0516 22:36:42.902838    7524 status.go:258] status error: host: state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	E0516 22:36:42.902838    7524 status.go:261] The "multinode-20220516223121-2444" host does not exist!

** /stderr **
multinode_test.go:331: incorrect number of stopped hosts: args "out/minikube-windows-amd64.exe -p multinode-20220516223121-2444 status --alsologtostderr": multinode-20220516223121-2444
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

multinode_test.go:335: incorrect number of stopped kubelets: args "out/minikube-windows-amd64.exe -p multinode-20220516223121-2444 status --alsologtostderr": multinode-20220516223121-2444
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220516223121-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220516223121-2444: exit status 1 (1.0966534s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220516223121-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220516223121-2444 -n multinode-20220516223121-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220516223121-2444 -n multinode-20220516223121-2444: exit status 7 (2.8181762s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:36:46.829527    2816 status.go:247] status error: host: state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220516223121-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopMultiNode (31.55s)

TestMultiNode/serial/RestartMultiNode (118.31s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:342: (dbg) Done: docker version -f {{.Server.Version}}: (1.1047621s)
multinode_test.go:352: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220516223121-2444 --wait=true -v=8 --alsologtostderr --driver=docker
multinode_test.go:352: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-20220516223121-2444 --wait=true -v=8 --alsologtostderr --driver=docker: exit status 60 (1m53.1298611s)

-- stdout --
	* [multinode-20220516223121-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-20220516223121-2444 in cluster multinode-20220516223121-2444
	* Pulling base image ...
	* docker "multinode-20220516223121-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-20220516223121-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0516 22:36:48.213727    5648 out.go:296] Setting OutFile to fd 736 ...
	I0516 22:36:48.269140    5648 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:36:48.269140    5648 out.go:309] Setting ErrFile to fd 944...
	I0516 22:36:48.269140    5648 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:36:48.279499    5648 out.go:303] Setting JSON to false
	I0516 22:36:48.281892    5648 start.go:115] hostinfo: {"hostname":"minikube2","uptime":3720,"bootTime":1652736888,"procs":148,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0516 22:36:48.281892    5648 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0516 22:36:48.286568    5648 out.go:177] * [multinode-20220516223121-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0516 22:36:48.289187    5648 notify.go:193] Checking for updates...
	I0516 22:36:48.292464    5648 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0516 22:36:48.295289    5648 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0516 22:36:48.298597    5648 out.go:177]   - MINIKUBE_LOCATION=12739
	I0516 22:36:48.300592    5648 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0516 22:36:48.306518    5648 config.go:178] Loaded profile config "multinode-20220516223121-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:36:48.307212    5648 driver.go:358] Setting default libvirt URI to qemu:///system
	I0516 22:36:50.804966    5648 docker.go:137] docker version: linux-20.10.14
	I0516 22:36:50.813090    5648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:36:52.839091    5648 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0258525s)
	I0516 22:36:52.840221    5648 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 22:36:51.8096062 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:36:52.844863    5648 out.go:177] * Using the docker driver based on existing profile
	I0516 22:36:52.847137    5648 start.go:284] selected driver: docker
	I0516 22:36:52.847137    5648 start.go:806] validating driver "docker" against &{Name:multinode-20220516223121-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:multinode-20220516223121-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 22:36:52.847137    5648 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0516 22:36:52.870331    5648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:36:54.878325    5648 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0079801s)
	I0516 22:36:54.878609    5648 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 22:36:53.8578168 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:36:54.935502    5648 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0516 22:36:54.935574    5648 cni.go:95] Creating CNI manager for ""
	I0516 22:36:54.935618    5648 cni.go:156] 1 nodes found, recommending kindnet
	I0516 22:36:54.935645    5648 start_flags.go:306] config:
	{Name:multinode-20220516223121-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:multinode-20220516223121-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 22:36:54.942788    5648 out.go:177] * Starting control plane node multinode-20220516223121-2444 in cluster multinode-20220516223121-2444
	I0516 22:36:54.945417    5648 cache.go:120] Beginning downloading kic base image for docker with docker
	I0516 22:36:54.947826    5648 out.go:177] * Pulling base image ...
	I0516 22:36:54.950854    5648 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0516 22:36:54.951244    5648 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0516 22:36:54.951244    5648 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0516 22:36:54.951244    5648 cache.go:57] Caching tarball of preloaded images
	I0516 22:36:54.951244    5648 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0516 22:36:54.951981    5648 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0516 22:36:54.952210    5648 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\multinode-20220516223121-2444\config.json ...
	I0516 22:36:56.036037    5648 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0516 22:36:56.036113    5648 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:36:56.036467    5648 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:36:56.036560    5648 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
	I0516 22:36:56.036805    5648 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory, skipping pull
	I0516 22:36:56.036805    5648 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in cache, skipping pull
	I0516 22:36:56.037003    5648 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
	I0516 22:36:56.037046    5648 cache.go:160] Loading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from local cache
	I0516 22:36:56.037155    5648 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:36:58.216363    5648 image.go:219] response: {"errorDetail":{"message":"mkdir /var/lib/docker/tmp/docker-import-139471595: read-only file system"},"error":"mkdir /var/lib/docker/tmp/docker-import-139471595: read-only file system"}
	I0516 22:36:58.217319    5648 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from cached tarball
	I0516 22:36:58.217455    5648 cache.go:206] Successfully downloaded all kic artifacts
	I0516 22:36:58.217626    5648 start.go:352] acquiring machines lock for multinode-20220516223121-2444: {Name:mk85c04f827b76c021a94c8d716dce0669525244 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:36:58.217743    5648 start.go:356] acquired machines lock for "multinode-20220516223121-2444" in 0s
	I0516 22:36:58.217743    5648 start.go:94] Skipping create...Using existing machine configuration
	I0516 22:36:58.217743    5648 fix.go:55] fixHost starting: 
	I0516 22:36:58.238164    5648 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:36:59.295944    5648 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:36:59.296210    5648 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0577723s)
	I0516 22:36:59.296279    5648 fix.go:103] recreateIfNeeded on multinode-20220516223121-2444: state= err=unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:36:59.296345    5648 fix.go:108] machineExists: false. err=machine does not exist
	I0516 22:36:59.300143    5648 out.go:177] * docker "multinode-20220516223121-2444" container is missing, will recreate.
	I0516 22:36:59.302103    5648 delete.go:124] DEMOLISHING multinode-20220516223121-2444 ...
	I0516 22:36:59.315194    5648 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:37:00.346231    5648 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:37:00.346231    5648 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0310292s)
	W0516 22:37:00.346231    5648 stop.go:75] unable to get state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:37:00.346231    5648 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:37:00.361180    5648 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:37:01.390170    5648 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:37:01.390170    5648 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0289823s)
	I0516 22:37:01.390170    5648 delete.go:82] Unable to get host status for multinode-20220516223121-2444, assuming it has already been deleted: state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:37:01.399715    5648 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220516223121-2444
	W0516 22:37:02.467335    5648 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220516223121-2444 returned with exit code 1
	I0516 22:37:02.467335    5648 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} multinode-20220516223121-2444: (1.0676123s)
	I0516 22:37:02.467429    5648 kic.go:356] could not find the container multinode-20220516223121-2444 to remove it. will try anyways
	I0516 22:37:02.477210    5648 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:37:03.503749    5648 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:37:03.503749    5648 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0263643s)
	W0516 22:37:03.503866    5648 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:37:03.512152    5648 cli_runner.go:164] Run: docker exec --privileged -t multinode-20220516223121-2444 /bin/bash -c "sudo init 0"
	W0516 22:37:04.573108    5648 cli_runner.go:211] docker exec --privileged -t multinode-20220516223121-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 22:37:04.573151    5648 cli_runner.go:217] Completed: docker exec --privileged -t multinode-20220516223121-2444 /bin/bash -c "sudo init 0": (1.060748s)
	I0516 22:37:04.573221    5648 oci.go:641] error shutdown multinode-20220516223121-2444: docker exec --privileged -t multinode-20220516223121-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:37:05.586105    5648 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:37:06.654303    5648 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:37:06.654303    5648 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0679113s)
	I0516 22:37:06.654303    5648 oci.go:653] temporary error verifying shutdown: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:37:06.654303    5648 oci.go:655] temporary error: container multinode-20220516223121-2444 status is  but expect it to be exited
	I0516 22:37:06.654303    5648 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:37:07.223840    5648 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:37:08.271789    5648 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:37:08.271789    5648 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0479418s)
	I0516 22:37:08.271789    5648 oci.go:653] temporary error verifying shutdown: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:37:08.274773    5648 oci.go:655] temporary error: container multinode-20220516223121-2444 status is  but expect it to be exited
	I0516 22:37:08.274773    5648 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:37:09.371971    5648 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:37:10.380672    5648 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:37:10.380897    5648 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0086936s)
	I0516 22:37:10.380976    5648 oci.go:653] temporary error verifying shutdown: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:37:10.381038    5648 oci.go:655] temporary error: container multinode-20220516223121-2444 status is  but expect it to be exited
	I0516 22:37:10.381117    5648 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:37:11.712594    5648 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:37:12.757550    5648 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:37:12.757550    5648 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0449482s)
	I0516 22:37:12.757550    5648 oci.go:653] temporary error verifying shutdown: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:37:12.757550    5648 oci.go:655] temporary error: container multinode-20220516223121-2444 status is  but expect it to be exited
	I0516 22:37:12.757550    5648 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:37:14.360673    5648 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:37:15.402506    5648 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:37:15.402506    5648 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0416383s)
	I0516 22:37:15.402506    5648 oci.go:653] temporary error verifying shutdown: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:37:15.402506    5648 oci.go:655] temporary error: container multinode-20220516223121-2444 status is  but expect it to be exited
	I0516 22:37:15.402506    5648 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:37:17.756962    5648 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:37:18.794427    5648 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:37:18.794473    5648 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0372611s)
	I0516 22:37:18.794768    5648 oci.go:653] temporary error verifying shutdown: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:37:18.794810    5648 oci.go:655] temporary error: container multinode-20220516223121-2444 status is  but expect it to be exited
	I0516 22:37:18.794834    5648 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:37:23.311959    5648 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:37:24.344561    5648 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:37:24.344635    5648 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0325178s)
	I0516 22:37:24.344699    5648 oci.go:653] temporary error verifying shutdown: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:37:24.344746    5648 oci.go:655] temporary error: container multinode-20220516223121-2444 status is  but expect it to be exited
	I0516 22:37:24.344856    5648 oci.go:88] couldn't shut down multinode-20220516223121-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	 
	I0516 22:37:24.354091    5648 cli_runner.go:164] Run: docker rm -f -v multinode-20220516223121-2444
	I0516 22:37:25.368868    5648 cli_runner.go:217] Completed: docker rm -f -v multinode-20220516223121-2444: (1.0146179s)
	I0516 22:37:25.377767    5648 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220516223121-2444
	W0516 22:37:26.416584    5648 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220516223121-2444 returned with exit code 1
	I0516 22:37:26.416706    5648 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} multinode-20220516223121-2444: (1.0386063s)
	I0516 22:37:26.425233    5648 cli_runner.go:164] Run: docker network inspect multinode-20220516223121-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:37:27.480077    5648 cli_runner.go:211] docker network inspect multinode-20220516223121-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:37:27.480077    5648 cli_runner.go:217] Completed: docker network inspect multinode-20220516223121-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0548368s)
	I0516 22:37:27.488267    5648 network_create.go:272] running [docker network inspect multinode-20220516223121-2444] to gather additional debugging logs...
	I0516 22:37:27.488267    5648 cli_runner.go:164] Run: docker network inspect multinode-20220516223121-2444
	W0516 22:37:28.481910    5648 cli_runner.go:211] docker network inspect multinode-20220516223121-2444 returned with exit code 1
	I0516 22:37:28.481910    5648 network_create.go:275] error running [docker network inspect multinode-20220516223121-2444]: docker network inspect multinode-20220516223121-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220516223121-2444
	I0516 22:37:28.481910    5648 network_create.go:277] output of [docker network inspect multinode-20220516223121-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220516223121-2444
	
	** /stderr **
	W0516 22:37:28.483704    5648 delete.go:139] delete failed (probably ok) <nil>
	I0516 22:37:28.483758    5648 fix.go:115] Sleeping 1 second for extra luck!
	I0516 22:37:29.486777    5648 start.go:131] createHost starting for "" (driver="docker")
	I0516 22:37:29.491855    5648 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0516 22:37:29.491855    5648 start.go:165] libmachine.API.Create for "multinode-20220516223121-2444" (driver="docker")
	I0516 22:37:29.491855    5648 client.go:168] LocalClient.Create starting
	I0516 22:37:29.493242    5648 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 22:37:29.493350    5648 main.go:134] libmachine: Decoding PEM data...
	I0516 22:37:29.493350    5648 main.go:134] libmachine: Parsing certificate...
	I0516 22:37:29.493350    5648 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 22:37:29.493350    5648 main.go:134] libmachine: Decoding PEM data...
	I0516 22:37:29.493926    5648 main.go:134] libmachine: Parsing certificate...
	I0516 22:37:29.502895    5648 cli_runner.go:164] Run: docker network inspect multinode-20220516223121-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:37:30.533551    5648 cli_runner.go:211] docker network inspect multinode-20220516223121-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:37:30.533580    5648 cli_runner.go:217] Completed: docker network inspect multinode-20220516223121-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0303963s)
	I0516 22:37:30.542531    5648 network_create.go:272] running [docker network inspect multinode-20220516223121-2444] to gather additional debugging logs...
	I0516 22:37:30.542531    5648 cli_runner.go:164] Run: docker network inspect multinode-20220516223121-2444
	W0516 22:37:31.565599    5648 cli_runner.go:211] docker network inspect multinode-20220516223121-2444 returned with exit code 1
	I0516 22:37:31.565599    5648 cli_runner.go:217] Completed: docker network inspect multinode-20220516223121-2444: (1.0230599s)
	I0516 22:37:31.565599    5648 network_create.go:275] error running [docker network inspect multinode-20220516223121-2444]: docker network inspect multinode-20220516223121-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220516223121-2444
	I0516 22:37:31.565599    5648 network_create.go:277] output of [docker network inspect multinode-20220516223121-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220516223121-2444
	
	** /stderr **
	I0516 22:37:31.575373    5648 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 22:37:32.602374    5648 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.026812s)
	I0516 22:37:32.619826    5648 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000594140] misses:0}
	I0516 22:37:32.619826    5648 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:37:32.619826    5648 network_create.go:115] attempt to create docker network multinode-20220516223121-2444 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0516 22:37:32.627246    5648 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444
	W0516 22:37:33.671227    5648 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444 returned with exit code 1
	I0516 22:37:33.671298    5648 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444: (1.0438395s)
	W0516 22:37:33.671361    5648 network_create.go:107] failed to create docker network multinode-20220516223121-2444 192.168.49.0/24, will retry: subnet is taken
	I0516 22:37:33.686975    5648 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000594140] amended:false}} dirty:map[] misses:0}
	I0516 22:37:33.686975    5648 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:37:33.700811    5648 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000594140] amended:true}} dirty:map[192.168.49.0:0xc000594140 192.168.58.0:0xc0004ec678] misses:0}
	I0516 22:37:33.700811    5648 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:37:33.700811    5648 network_create.go:115] attempt to create docker network multinode-20220516223121-2444 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0516 22:37:33.711023    5648 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444
	W0516 22:37:34.738650    5648 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444 returned with exit code 1
	I0516 22:37:34.738650    5648 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444: (1.0276196s)
	W0516 22:37:34.738650    5648 network_create.go:107] failed to create docker network multinode-20220516223121-2444 192.168.58.0/24, will retry: subnet is taken
	I0516 22:37:34.754040    5648 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000594140] amended:true}} dirty:map[192.168.49.0:0xc000594140 192.168.58.0:0xc0004ec678] misses:1}
	I0516 22:37:34.754040    5648 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:37:34.768303    5648 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000594140] amended:true}} dirty:map[192.168.49.0:0xc000594140 192.168.58.0:0xc0004ec678 192.168.67.0:0xc0004ec730] misses:1}
	I0516 22:37:34.768303    5648 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:37:34.768303    5648 network_create.go:115] attempt to create docker network multinode-20220516223121-2444 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0516 22:37:34.776633    5648 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444
	W0516 22:37:35.798591    5648 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444 returned with exit code 1
	I0516 22:37:35.798665    5648 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444: (1.0212949s)
	W0516 22:37:35.798718    5648 network_create.go:107] failed to create docker network multinode-20220516223121-2444 192.168.67.0/24, will retry: subnet is taken
	I0516 22:37:35.813469    5648 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000594140] amended:true}} dirty:map[192.168.49.0:0xc000594140 192.168.58.0:0xc0004ec678 192.168.67.0:0xc0004ec730] misses:2}
	I0516 22:37:35.813469    5648 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:37:35.827391    5648 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000594140] amended:true}} dirty:map[192.168.49.0:0xc000594140 192.168.58.0:0xc0004ec678 192.168.67.0:0xc0004ec730 192.168.76.0:0xc0005941e0] misses:2}
	I0516 22:37:35.827391    5648 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:37:35.827391    5648 network_create.go:115] attempt to create docker network multinode-20220516223121-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0516 22:37:35.837928    5648 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444
	W0516 22:37:36.893547    5648 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444 returned with exit code 1
	I0516 22:37:36.893612    5648 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444: (1.0554918s)
	E0516 22:37:36.893728    5648 network_create.go:104] error while trying to create docker network multinode-20220516223121-2444 192.168.76.0/24: create docker network multinode-20220516223121-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ee4edd305cb7c016392076dd9885280309e37e05522b22526533304d979f5947 (br-ee4edd305cb7): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	W0516 22:37:36.893975    5648 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220516223121-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ee4edd305cb7c016392076dd9885280309e37e05522b22526533304d979f5947 (br-ee4edd305cb7): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220516223121-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ee4edd305cb7c016392076dd9885280309e37e05522b22526533304d979f5947 (br-ee4edd305cb7): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	I0516 22:37:36.909980    5648 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 22:37:37.957877    5648 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0478885s)
	I0516 22:37:37.965914    5648 cli_runner.go:164] Run: docker volume create multinode-20220516223121-2444 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 22:37:38.963262    5648 cli_runner.go:211] docker volume create multinode-20220516223121-2444 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 22:37:38.963262    5648 client.go:171] LocalClient.Create took 9.471337s
	I0516 22:37:40.988782    5648 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:37:40.999226    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:37:42.041171    5648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:37:42.041171    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0419372s)
	I0516 22:37:42.041171    5648 retry.go:31] will retry after 164.129813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:37:42.215789    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:37:43.237571    5648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:37:43.237571    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0217737s)
	W0516 22:37:43.237571    5648 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	
	W0516 22:37:43.237571    5648 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:37:43.249783    5648 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:37:43.255892    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:37:44.270091    5648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:37:44.270091    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0139909s)
	I0516 22:37:44.270091    5648 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:37:44.484023    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:37:45.510924    5648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:37:45.510924    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0267309s)
	W0516 22:37:45.510924    5648 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	
	W0516 22:37:45.510924    5648 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:37:45.510924    5648 start.go:134] duration metric: createHost completed in 16.0240291s
	I0516 22:37:45.523670    5648 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:37:45.532476    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:37:46.560718    5648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:37:46.560718    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0282351s)
	I0516 22:37:46.560718    5648 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:37:46.915480    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:37:47.920858    5648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:37:47.920858    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0053704s)
	W0516 22:37:47.920858    5648 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	
	W0516 22:37:47.920858    5648 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:37:47.932143    5648 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:37:47.939095    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:37:48.933294    5648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:37:48.933596    5648 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:37:49.174596    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:37:50.216728    5648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:37:50.216728    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0421245s)
	W0516 22:37:50.216728    5648 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	
	W0516 22:37:50.216728    5648 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:37:50.216728    5648 fix.go:57] fixHost completed within 51.9986018s
	I0516 22:37:50.216728    5648 start.go:81] releasing machines lock for "multinode-20220516223121-2444", held for 51.9986018s
	W0516 22:37:50.216728    5648 start.go:608] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220516223121-2444 container: docker volume create multinode-20220516223121-2444 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220516223121-2444: error while creating volume root path '/var/lib/docker/volumes/multinode-20220516223121-2444': mkdir /var/lib/docker/volumes/multinode-20220516223121-2444: read-only file system
	W0516 22:37:50.217714    5648 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220516223121-2444 container: docker volume create multinode-20220516223121-2444 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220516223121-2444: error while creating volume root path '/var/lib/docker/volumes/multinode-20220516223121-2444': mkdir /var/lib/docker/volumes/multinode-20220516223121-2444: read-only file system
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220516223121-2444 container: docker volume create multinode-20220516223121-2444 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220516223121-2444: error while creating volume root path '/var/lib/docker/volumes/multinode-20220516223121-2444': mkdir /var/lib/docker/volumes/multinode-20220516223121-2444: read-only file system
	
	I0516 22:37:50.217714    5648 start.go:623] Will try again in 5 seconds ...
	I0516 22:37:55.229988    5648 start.go:352] acquiring machines lock for multinode-20220516223121-2444: {Name:mk85c04f827b76c021a94c8d716dce0669525244 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:37:55.229988    5648 start.go:356] acquired machines lock for "multinode-20220516223121-2444" in 0s
	I0516 22:37:55.230512    5648 start.go:94] Skipping create...Using existing machine configuration
	I0516 22:37:55.230552    5648 fix.go:55] fixHost starting: 
	I0516 22:37:55.249778    5648 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:37:56.266177    5648 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:37:56.266259    5648 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0162976s)
	I0516 22:37:56.266259    5648 fix.go:103] recreateIfNeeded on multinode-20220516223121-2444: state= err=unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:37:56.266259    5648 fix.go:108] machineExists: false. err=machine does not exist
	I0516 22:37:56.269985    5648 out.go:177] * docker "multinode-20220516223121-2444" container is missing, will recreate.
	I0516 22:37:56.273417    5648 delete.go:124] DEMOLISHING multinode-20220516223121-2444 ...
	I0516 22:37:56.287196    5648 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:37:57.282105    5648 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	W0516 22:37:57.282199    5648 stop.go:75] unable to get state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:37:57.282228    5648 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:37:57.297804    5648 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:37:58.299532    5648 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:37:58.299563    5648 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0015545s)
	I0516 22:37:58.299663    5648 delete.go:82] Unable to get host status for multinode-20220516223121-2444, assuming it has already been deleted: state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:37:58.308418    5648 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220516223121-2444
	W0516 22:37:59.333288    5648 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220516223121-2444 returned with exit code 1
	I0516 22:37:59.333288    5648 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} multinode-20220516223121-2444: (1.0248626s)
	I0516 22:37:59.333288    5648 kic.go:356] could not find the container multinode-20220516223121-2444 to remove it. will try anyways
	I0516 22:37:59.342621    5648 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:38:00.359665    5648 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:38:00.359737    5648 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0165902s)
	W0516 22:38:00.359737    5648 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:38:00.368837    5648 cli_runner.go:164] Run: docker exec --privileged -t multinode-20220516223121-2444 /bin/bash -c "sudo init 0"
	W0516 22:38:01.379520    5648 cli_runner.go:211] docker exec --privileged -t multinode-20220516223121-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 22:38:01.379750    5648 cli_runner.go:217] Completed: docker exec --privileged -t multinode-20220516223121-2444 /bin/bash -c "sudo init 0": (1.0104683s)
	I0516 22:38:01.379820    5648 oci.go:641] error shutdown multinode-20220516223121-2444: docker exec --privileged -t multinode-20220516223121-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:38:02.393328    5648 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:38:03.460318    5648 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:38:03.460410    5648 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0668229s)
	I0516 22:38:03.460559    5648 oci.go:653] temporary error verifying shutdown: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:38:03.460559    5648 oci.go:655] temporary error: container multinode-20220516223121-2444 status is  but expect it to be exited
	I0516 22:38:03.460559    5648 retry.go:31] will retry after 484.444922ms: couldn't verify container is exited. %v: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:38:03.965325    5648 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:38:04.979472    5648 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:38:04.979656    5648 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0139137s)
	I0516 22:38:04.979745    5648 oci.go:653] temporary error verifying shutdown: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:38:04.979769    5648 oci.go:655] temporary error: container multinode-20220516223121-2444 status is  but expect it to be exited
	I0516 22:38:04.979769    5648 retry.go:31] will retry after 587.275613ms: couldn't verify container is exited. %v: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:38:05.585483    5648 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:38:06.583092    5648 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:38:06.583256    5648 oci.go:653] temporary error verifying shutdown: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:38:06.583396    5648 oci.go:655] temporary error: container multinode-20220516223121-2444 status is  but expect it to be exited
	I0516 22:38:06.583396    5648 retry.go:31] will retry after 892.239589ms: couldn't verify container is exited. %v: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:38:07.490462    5648 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:38:08.518019    5648 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:38:08.518019    5648 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0275491s)
	I0516 22:38:08.518019    5648 oci.go:653] temporary error verifying shutdown: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:38:08.518019    5648 oci.go:655] temporary error: container multinode-20220516223121-2444 status is  but expect it to be exited
	I0516 22:38:08.518019    5648 retry.go:31] will retry after 1.989705391s: couldn't verify container is exited. %v: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:38:10.531891    5648 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:38:11.537202    5648 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:38:11.537266    5648 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0051911s)
	I0516 22:38:11.537266    5648 oci.go:653] temporary error verifying shutdown: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:38:11.537266    5648 oci.go:655] temporary error: container multinode-20220516223121-2444 status is  but expect it to be exited
	I0516 22:38:11.537266    5648 retry.go:31] will retry after 1.818837414s: couldn't verify container is exited. %v: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:38:13.371196    5648 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:38:14.395356    5648 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:38:14.395569    5648 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0239527s)
	I0516 22:38:14.395680    5648 oci.go:653] temporary error verifying shutdown: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:38:14.395732    5648 oci.go:655] temporary error: container multinode-20220516223121-2444 status is  but expect it to be exited
	I0516 22:38:14.395760    5648 retry.go:31] will retry after 2.669912672s: couldn't verify container is exited. %v: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:38:17.091860    5648 cli_runner.go:164] Run: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}
	W0516 22:38:18.117621    5648 cli_runner.go:211] docker container inspect multinode-20220516223121-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:38:18.117621    5648 cli_runner.go:217] Completed: docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: (1.0257531s)
	I0516 22:38:18.117621    5648 oci.go:653] temporary error verifying shutdown: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:38:18.117621    5648 oci.go:655] temporary error: container multinode-20220516223121-2444 status is  but expect it to be exited
	I0516 22:38:18.117621    5648 oci.go:88] couldn't shut down multinode-20220516223121-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	 
	I0516 22:38:18.126785    5648 cli_runner.go:164] Run: docker rm -f -v multinode-20220516223121-2444
	I0516 22:38:19.146667    5648 cli_runner.go:217] Completed: docker rm -f -v multinode-20220516223121-2444: (1.019758s)
	I0516 22:38:19.155875    5648 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220516223121-2444
	W0516 22:38:20.177997    5648 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220516223121-2444 returned with exit code 1
	I0516 22:38:20.178151    5648 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} multinode-20220516223121-2444: (1.0219616s)
	I0516 22:38:20.186762    5648 cli_runner.go:164] Run: docker network inspect multinode-20220516223121-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:38:21.185645    5648 cli_runner.go:211] docker network inspect multinode-20220516223121-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:38:21.192710    5648 network_create.go:272] running [docker network inspect multinode-20220516223121-2444] to gather additional debugging logs...
	I0516 22:38:21.192710    5648 cli_runner.go:164] Run: docker network inspect multinode-20220516223121-2444
	W0516 22:38:22.191805    5648 cli_runner.go:211] docker network inspect multinode-20220516223121-2444 returned with exit code 1
	I0516 22:38:22.191805    5648 network_create.go:275] error running [docker network inspect multinode-20220516223121-2444]: docker network inspect multinode-20220516223121-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220516223121-2444
	I0516 22:38:22.191805    5648 network_create.go:277] output of [docker network inspect multinode-20220516223121-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220516223121-2444
	
	** /stderr **
	W0516 22:38:22.191805    5648 delete.go:139] delete failed (probably ok) <nil>
	I0516 22:38:22.191805    5648 fix.go:115] Sleeping 1 second for extra luck!
	I0516 22:38:23.194561    5648 start.go:131] createHost starting for "" (driver="docker")
	I0516 22:38:23.198205    5648 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0516 22:38:23.198585    5648 start.go:165] libmachine.API.Create for "multinode-20220516223121-2444" (driver="docker")
	I0516 22:38:23.198670    5648 client.go:168] LocalClient.Create starting
	I0516 22:38:23.198723    5648 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 22:38:23.199441    5648 main.go:134] libmachine: Decoding PEM data...
	I0516 22:38:23.199494    5648 main.go:134] libmachine: Parsing certificate...
	I0516 22:38:23.199585    5648 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 22:38:23.199585    5648 main.go:134] libmachine: Decoding PEM data...
	I0516 22:38:23.199585    5648 main.go:134] libmachine: Parsing certificate...
	I0516 22:38:23.211502    5648 cli_runner.go:164] Run: docker network inspect multinode-20220516223121-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:38:24.258866    5648 cli_runner.go:211] docker network inspect multinode-20220516223121-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:38:24.258866    5648 cli_runner.go:217] Completed: docker network inspect multinode-20220516223121-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0473566s)
	I0516 22:38:24.265872    5648 network_create.go:272] running [docker network inspect multinode-20220516223121-2444] to gather additional debugging logs...
	I0516 22:38:24.265872    5648 cli_runner.go:164] Run: docker network inspect multinode-20220516223121-2444
	W0516 22:38:25.283043    5648 cli_runner.go:211] docker network inspect multinode-20220516223121-2444 returned with exit code 1
	I0516 22:38:25.283080    5648 cli_runner.go:217] Completed: docker network inspect multinode-20220516223121-2444: (1.017056s)
	I0516 22:38:25.283156    5648 network_create.go:275] error running [docker network inspect multinode-20220516223121-2444]: docker network inspect multinode-20220516223121-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220516223121-2444
	I0516 22:38:25.283362    5648 network_create.go:277] output of [docker network inspect multinode-20220516223121-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220516223121-2444
	
	** /stderr **
	I0516 22:38:25.291738    5648 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 22:38:26.301111    5648 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0093654s)
	I0516 22:38:26.317464    5648 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000594140] amended:true}} dirty:map[192.168.49.0:0xc000594140 192.168.58.0:0xc0004ec678 192.168.67.0:0xc0004ec730 192.168.76.0:0xc0005941e0] misses:2}
	I0516 22:38:26.317464    5648 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:38:26.333658    5648 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000594140] amended:true}} dirty:map[192.168.49.0:0xc000594140 192.168.58.0:0xc0004ec678 192.168.67.0:0xc0004ec730 192.168.76.0:0xc0005941e0] misses:3}
	I0516 22:38:26.333658    5648 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:38:26.349001    5648 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000594140 192.168.58.0:0xc0004ec678 192.168.67.0:0xc0004ec730 192.168.76.0:0xc0005941e0] amended:false}} dirty:map[] misses:0}
	I0516 22:38:26.349001    5648 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:38:26.365502    5648 network.go:279] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000594140 192.168.58.0:0xc0004ec678 192.168.67.0:0xc0004ec730 192.168.76.0:0xc0005941e0] amended:false}} dirty:map[] misses:0}
	I0516 22:38:26.365502    5648 network.go:238] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:38:26.378143    5648 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000594140 192.168.58.0:0xc0004ec678 192.168.67.0:0xc0004ec730 192.168.76.0:0xc0005941e0] amended:true}} dirty:map[192.168.49.0:0xc000594140 192.168.58.0:0xc0004ec678 192.168.67.0:0xc0004ec730 192.168.76.0:0xc0005941e0 192.168.85.0:0xc0005943e0] misses:0}
	I0516 22:38:26.378143    5648 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:38:26.378143    5648 network_create.go:115] attempt to create docker network multinode-20220516223121-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0516 22:38:26.387563    5648 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444
	W0516 22:38:27.441711    5648 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444 returned with exit code 1
	I0516 22:38:27.441917    5648 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444: (1.0538696s)
	E0516 22:38:27.441989    5648 network_create.go:104] error while trying to create docker network multinode-20220516223121-2444 192.168.85.0/24: create docker network multinode-20220516223121-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 59b283e13ff3697235f97902ded221cb14dbcc19b856ccc6cdbab5ad7d3ce62f (br-59b283e13ff3): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	W0516 22:38:27.442014    5648 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220516223121-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 59b283e13ff3697235f97902ded221cb14dbcc19b856ccc6cdbab5ad7d3ce62f (br-59b283e13ff3): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220516223121-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 59b283e13ff3697235f97902ded221cb14dbcc19b856ccc6cdbab5ad7d3ce62f (br-59b283e13ff3): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	I0516 22:38:27.457458    5648 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 22:38:28.494953    5648 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0374872s)
	I0516 22:38:28.503022    5648 cli_runner.go:164] Run: docker volume create multinode-20220516223121-2444 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 22:38:29.541929    5648 cli_runner.go:211] docker volume create multinode-20220516223121-2444 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 22:38:29.541983    5648 cli_runner.go:217] Completed: docker volume create multinode-20220516223121-2444 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444 --label created_by.minikube.sigs.k8s.io=true: (1.038625s)
	I0516 22:38:29.541983    5648 client.go:171] LocalClient.Create took 6.3432653s
	I0516 22:38:31.560044    5648 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:38:31.566723    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:38:32.605884    5648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:38:32.605884    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0391002s)
	I0516 22:38:32.606072    5648 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:38:32.884022    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:38:33.921993    5648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:38:33.921993    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0379627s)
	W0516 22:38:33.921993    5648 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	
	W0516 22:38:33.921993    5648 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:38:33.941940    5648 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:38:33.947974    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:38:34.977396    5648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:38:34.977396    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0283405s)
	I0516 22:38:34.977396    5648 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:38:35.202258    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:38:36.217920    5648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:38:36.217920    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0155242s)
	W0516 22:38:36.218001    5648 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	
	W0516 22:38:36.218001    5648 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:38:36.218001    5648 start.go:134] duration metric: createHost completed in 13.023157s
	I0516 22:38:36.229089    5648 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:38:36.236178    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:38:37.255161    5648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:38:37.255161    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0189754s)
	I0516 22:38:37.255161    5648 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:38:37.592304    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:38:38.604610    5648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:38:38.604639    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0121923s)
	W0516 22:38:38.604903    5648 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	
	W0516 22:38:38.604920    5648 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:38:38.616030    5648 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:38:38.622167    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:38:39.654721    5648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:38:39.654721    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.0325458s)
	I0516 22:38:39.654721    5648 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:38:40.010032    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444
	W0516 22:38:41.059542    5648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444 returned with exit code 1
	I0516 22:38:41.059542    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: (1.049502s)
	W0516 22:38:41.059542    5648 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	
	W0516 22:38:41.059542    5648 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220516223121-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220516223121-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	I0516 22:38:41.059542    5648 fix.go:57] fixHost completed within 45.8286489s
	I0516 22:38:41.059542    5648 start.go:81] releasing machines lock for "multinode-20220516223121-2444", held for 45.8292133s
	W0516 22:38:41.059542    5648 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-20220516223121-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220516223121-2444 container: docker volume create multinode-20220516223121-2444 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220516223121-2444: error while creating volume root path '/var/lib/docker/volumes/multinode-20220516223121-2444': mkdir /var/lib/docker/volumes/multinode-20220516223121-2444: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p multinode-20220516223121-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220516223121-2444 container: docker volume create multinode-20220516223121-2444 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220516223121-2444: error while creating volume root path '/var/lib/docker/volumes/multinode-20220516223121-2444': mkdir /var/lib/docker/volumes/multinode-20220516223121-2444: read-only file system
	
	I0516 22:38:41.065325    5648 out.go:177] 
	W0516 22:38:41.067775    5648 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220516223121-2444 container: docker volume create multinode-20220516223121-2444 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220516223121-2444: error while creating volume root path '/var/lib/docker/volumes/multinode-20220516223121-2444': mkdir /var/lib/docker/volumes/multinode-20220516223121-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220516223121-2444 container: docker volume create multinode-20220516223121-2444 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220516223121-2444: error while creating volume root path '/var/lib/docker/volumes/multinode-20220516223121-2444': mkdir /var/lib/docker/volumes/multinode-20220516223121-2444: read-only file system
	
	W0516 22:38:41.067930    5648 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0516 22:38:41.067930    5648 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0516 22:38:41.071070    5648 out.go:177] 

** /stderr **
multinode_test.go:354: failed to start cluster. args "out/minikube-windows-amd64.exe start -p multinode-20220516223121-2444 --wait=true -v=8 --alsologtostderr --driver=docker" : exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220516223121-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220516223121-2444: exit status 1 (1.1131886s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220516223121-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220516223121-2444 -n multinode-20220516223121-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220516223121-2444 -n multinode-20220516223121-2444: exit status 7 (2.7642114s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:38:45.138455    5084 status.go:247] status error: host: state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220516223121-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartMultiNode (118.31s)

TestMultiNode/serial/ValidateNameConflict (170.79s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20220516223121-2444
multinode_test.go:450: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220516223121-2444-m01 --driver=docker
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-20220516223121-2444-m01 --driver=docker: exit status 60 (1m17.6846908s)

-- stdout --
	* [multinode-20220516223121-2444-m01] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node multinode-20220516223121-2444-m01 in cluster multinode-20220516223121-2444-m01
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	* docker "multinode-20220516223121-2444-m01" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	
	

-- /stdout --
** stderr ** 
	E0516 22:39:03.306651    7552 network_create.go:104] error while trying to create docker network multinode-20220516223121-2444-m01 192.168.76.0/24: create docker network multinode-20220516223121-2444-m01 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444-m01: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 32727d4dcfe289e0072a333b91457027aa9dd0a7e89c44be25c5833af1cfa753 (br-32727d4dcfe2): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220516223121-2444-m01 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444-m01: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 32727d4dcfe289e0072a333b91457027aa9dd0a7e89c44be25c5833af1cfa753 (br-32727d4dcfe2): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for multinode-20220516223121-2444-m01 container: docker volume create multinode-20220516223121-2444-m01 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444-m01 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220516223121-2444-m01: error while creating volume root path '/var/lib/docker/volumes/multinode-20220516223121-2444-m01': mkdir /var/lib/docker/volumes/multinode-20220516223121-2444-m01: read-only file system
	
	E0516 22:39:49.729060    7552 network_create.go:104] error while trying to create docker network multinode-20220516223121-2444-m01 192.168.85.0/24: create docker network multinode-20220516223121-2444-m01 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444-m01: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 24a46573497975b6835f0020a8a594d81ae7d8423063382240eb11f86b1ef411 (br-24a465734979): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220516223121-2444-m01 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444-m01: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 24a46573497975b6835f0020a8a594d81ae7d8423063382240eb11f86b1ef411 (br-24a465734979): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p multinode-20220516223121-2444-m01" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220516223121-2444-m01 container: docker volume create multinode-20220516223121-2444-m01 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444-m01 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220516223121-2444-m01: error while creating volume root path '/var/lib/docker/volumes/multinode-20220516223121-2444-m01': mkdir /var/lib/docker/volumes/multinode-20220516223121-2444-m01: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220516223121-2444-m01 container: docker volume create multinode-20220516223121-2444-m01 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444-m01 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220516223121-2444-m01: error while creating volume root path '/var/lib/docker/volumes/multinode-20220516223121-2444-m01': mkdir /var/lib/docker/volumes/multinode-20220516223121-2444-m01: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220516223121-2444-m02 --driver=docker
multinode_test.go:458: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-20220516223121-2444-m02 --driver=docker: exit status 60 (1m17.6508401s)

-- stdout --
	* [multinode-20220516223121-2444-m02] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node multinode-20220516223121-2444-m02 in cluster multinode-20220516223121-2444-m02
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	* docker "multinode-20220516223121-2444-m02" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	
	

-- /stdout --
** stderr ** 
	E0516 22:40:20.837601    5524 network_create.go:104] error while trying to create docker network multinode-20220516223121-2444-m02 192.168.76.0/24: create docker network multinode-20220516223121-2444-m02 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444-m02: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 8fc18b202a6e088b372504f083877f5631f8da756809d8888b51471df4e2e00a (br-8fc18b202a6e): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220516223121-2444-m02 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444-m02: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 8fc18b202a6e088b372504f083877f5631f8da756809d8888b51471df4e2e00a (br-8fc18b202a6e): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for multinode-20220516223121-2444-m02 container: docker volume create multinode-20220516223121-2444-m02 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444-m02 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220516223121-2444-m02: error while creating volume root path '/var/lib/docker/volumes/multinode-20220516223121-2444-m02': mkdir /var/lib/docker/volumes/multinode-20220516223121-2444-m02: read-only file system
	
	E0516 22:41:07.423120    5524 network_create.go:104] error while trying to create docker network multinode-20220516223121-2444-m02 192.168.85.0/24: create docker network multinode-20220516223121-2444-m02 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444-m02: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 57efee2e92506813669d6eb4157d13e1758c3e79dd93842b326db095d2b87d59 (br-57efee2e9250): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220516223121-2444-m02 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220516223121-2444-m02: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 57efee2e92506813669d6eb4157d13e1758c3e79dd93842b326db095d2b87d59 (br-57efee2e9250): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p multinode-20220516223121-2444-m02" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220516223121-2444-m02 container: docker volume create multinode-20220516223121-2444-m02 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444-m02 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220516223121-2444-m02: error while creating volume root path '/var/lib/docker/volumes/multinode-20220516223121-2444-m02': mkdir /var/lib/docker/volumes/multinode-20220516223121-2444-m02: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220516223121-2444-m02 container: docker volume create multinode-20220516223121-2444-m02 --label name.minikube.sigs.k8s.io=multinode-20220516223121-2444-m02 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220516223121-2444-m02: error while creating volume root path '/var/lib/docker/volumes/multinode-20220516223121-2444-m02': mkdir /var/lib/docker/volumes/multinode-20220516223121-2444-m02: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
multinode_test.go:460: failed to start profile. args "out/minikube-windows-amd64.exe start -p multinode-20220516223121-2444-m02 --driver=docker" : exit status 60
multinode_test.go:465: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-20220516223121-2444
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-20220516223121-2444: exit status 80 (3.0453461s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_e3f75f9fdd712fd5423563a6a11e787bf6359068_23.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-20220516223121-2444-m02
multinode_test.go:470: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-20220516223121-2444-m02: (8.0979292s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ValidateNameConflict]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220516223121-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220516223121-2444: exit status 1 (1.1121486s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220516223121-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220516223121-2444 -n multinode-20220516223121-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220516223121-2444 -n multinode-20220516223121-2444: exit status 7 (2.8448203s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:41:35.935001    4240 status.go:247] status error: host: state: unknown state "multinode-20220516223121-2444": docker container inspect multinode-20220516223121-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220516223121-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220516223121-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (170.79s)

TestPreload (90.12s)

=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-20220516224147-2444 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.17.0
preload_test.go:48: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p test-preload-20220516224147-2444 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.17.0: exit status 60 (1m18.0573267s)

-- stdout --
	* [test-preload-20220516224147-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node test-preload-20220516224147-2444 in cluster test-preload-20220516224147-2444
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "test-preload-20220516224147-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0516 22:41:47.988268    1828 out.go:296] Setting OutFile to fd 880 ...
	I0516 22:41:48.050774    1828 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:41:48.050774    1828 out.go:309] Setting ErrFile to fd 1008...
	I0516 22:41:48.050774    1828 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:41:48.061209    1828 out.go:303] Setting JSON to false
	I0516 22:41:48.063131    1828 start.go:115] hostinfo: {"hostname":"minikube2","uptime":4020,"bootTime":1652736888,"procs":148,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0516 22:41:48.064103    1828 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0516 22:41:48.068772    1828 out.go:177] * [test-preload-20220516224147-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0516 22:41:48.070891    1828 notify.go:193] Checking for updates...
	I0516 22:41:48.070891    1828 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0516 22:41:48.078017    1828 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0516 22:41:48.080495    1828 out.go:177]   - MINIKUBE_LOCATION=12739
	I0516 22:41:48.083025    1828 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0516 22:41:48.085827    1828 config.go:178] Loaded profile config "multinode-20220516223121-2444-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:41:48.085827    1828 driver.go:358] Setting default libvirt URI to qemu:///system
	I0516 22:41:50.587457    1828 docker.go:137] docker version: linux-20.10.14
	I0516 22:41:50.595840    1828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:41:52.604015    1828 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0081597s)
	I0516 22:41:52.604737    1828 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 22:41:51.5825195 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:41:52.609938    1828 out.go:177] * Using the docker driver based on user configuration
	I0516 22:41:52.611955    1828 start.go:284] selected driver: docker
	I0516 22:41:52.612701    1828 start.go:806] validating driver "docker" against <nil>
	I0516 22:41:52.612701    1828 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0516 22:41:52.742477    1828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:41:54.772281    1828 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0296911s)
	I0516 22:41:54.772570    1828 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 22:41:53.7518699 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:41:54.772570    1828 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0516 22:41:54.773703    1828 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0516 22:41:54.776262    1828 out.go:177] * Using Docker Desktop driver with the root privilege
	I0516 22:41:54.778616    1828 cni.go:95] Creating CNI manager for ""
	I0516 22:41:54.778616    1828 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0516 22:41:54.778683    1828 start_flags.go:306] config:
	{Name:test-preload-20220516224147-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220516224147-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 22:41:54.781048    1828 out.go:177] * Starting control plane node test-preload-20220516224147-2444 in cluster test-preload-20220516224147-2444
	I0516 22:41:54.783824    1828 cache.go:120] Beginning downloading kic base image for docker with docker
	I0516 22:41:54.788238    1828 out.go:177] * Pulling base image ...
	I0516 22:41:54.791677    1828 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0516 22:41:54.791677    1828 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0516 22:41:54.791942    1828 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\test-preload-20220516224147-2444\config.json ...
	I0516 22:41:54.792138    1828 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.17.0
	I0516 22:41:54.792177    1828 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd:3.4.3-0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.4.3-0
	I0516 22:41:54.792177    1828 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\test-preload-20220516224147-2444\config.json: {Name:mkd6bd1809a1a2c84a22a3b83d42c6660579e581 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0516 22:41:54.792177    1828 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.17.0
	I0516 22:41:54.792138    1828 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.17.0
	I0516 22:41:54.792378    1828 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns:1.6.5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns_1.6.5
	I0516 22:41:54.792138    1828 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I0516 22:41:54.792177    1828 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause:3.1 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.1
	I0516 22:41:54.793383    1828 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.17.0
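	The "windows sanitize" entries above show cached image references having their ':' tag separator rewritten to '_', since a colon is not a legal character in a Windows file name. A minimal Go sketch of that mapping (hypothetical helper, not minikube's actual localpath.go code; it assumes only the drive-letter colon should survive):

```go
package main

import (
	"fmt"
	"strings"
)

// sanitizeWindowsPath replaces ':' with '_' in a cached-image path,
// keeping the drive-letter colon (e.g. "C:") intact. Illustrative only.
func sanitizeWindowsPath(p string) string {
	if len(p) > 2 && p[1] == ':' {
		return p[:2] + strings.ReplaceAll(p[2:], ":", "_")
	}
	return strings.ReplaceAll(p, ":", "_")
}

func main() {
	in := `C:\cache\images\amd64\k8s.gcr.io\etcd:3.4.3-0`
	// e.g. "etcd:3.4.3-0" becomes "etcd_3.4.3-0", as in the log above
	fmt.Println(sanitizeWindowsPath(in))
}
```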
	I0516 22:41:54.973073    1828 cache.go:107] acquiring lock: {Name:mk7af4d324ae5378e4084d0d909beff30d29e38f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:41:54.973073    1828 cache.go:107] acquiring lock: {Name:mkef9a3d9e3cbb1fe114c12bec441ddb11fca0c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:41:54.973073    1828 cache.go:107] acquiring lock: {Name:mk2bed4c2f349144087ca9b4676d08589a5f3b25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:41:54.973073    1828 cache.go:107] acquiring lock: {Name:mkb269f15b2e3b2569308dbf84de26df267b2fcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:41:54.973073    1828 cache.go:107] acquiring lock: {Name:mk93ccdec90972c05247bea23df9b97c54ef0291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:41:54.973073    1828 cache.go:107] acquiring lock: {Name:mkef49659bc6e08b20a8521eb6ce4fb712ad39c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:41:54.973073    1828 cache.go:107] acquiring lock: {Name:mk965b06109155c0e187b8b69e2b0548d9bccb3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:41:54.973073    1828 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I0516 22:41:54.973073    1828 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0516 22:41:54.973073    1828 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 179.6889ms
	I0516 22:41:54.973073    1828 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I0516 22:41:54.973073    1828 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.17.0
	I0516 22:41:54.974065    1828 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0516 22:41:54.974065    1828 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0516 22:41:54.974065    1828 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.5
	I0516 22:41:54.975064    1828 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0516 22:41:54.982063    1828 cache.go:107] acquiring lock: {Name:mkfe379c4c474168d5a5fd2dde0e9bf1347e993b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:41:54.982063    1828 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0516 22:41:54.982063    1828 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.0: Error response from daemon: reference does not exist
	I0516 22:41:54.996066    1828 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.0: Error response from daemon: reference does not exist
	I0516 22:41:55.013068    1828 image.go:180] daemon lookup for k8s.gcr.io/pause:3.1: Error response from daemon: reference does not exist
	I0516 22:41:55.033115    1828 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.0: Error response from daemon: reference does not exist
	I0516 22:41:55.044094    1828 image.go:180] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error response from daemon: reference does not exist
	I0516 22:41:55.064062    1828 image.go:180] daemon lookup for k8s.gcr.io/coredns:1.6.5: Error response from daemon: reference does not exist
	I0516 22:41:55.084800    1828 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.0: Error response from daemon: reference does not exist
	W0516 22:41:55.228082    1828 image.go:190] authn lookup for k8s.gcr.io/kube-proxy:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0516 22:41:55.479763    1828 image.go:190] authn lookup for k8s.gcr.io/kube-controller-manager:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0516 22:41:55.511964    1828 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.17.0
	I0516 22:41:55.701113    1828 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.17.0
	W0516 22:41:55.709637    1828 image.go:190] authn lookup for k8s.gcr.io/pause:3.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0516 22:41:55.881102    1828 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.1
	I0516 22:41:55.932608    1828 cache.go:156] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.1 exists
	I0516 22:41:55.932608    1828 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\pause_3.1" took 1.1392161s
	I0516 22:41:55.932608    1828 cache.go:80] save to tar file k8s.gcr.io/pause:3.1 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.1 succeeded
	I0516 22:41:55.953213    1828 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0516 22:41:55.953213    1828 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:41:55.953213    1828 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:41:55.953213    1828 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
	I0516 22:41:55.953213    1828 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory, skipping pull
	I0516 22:41:55.953905    1828 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in cache, skipping pull
	I0516 22:41:55.954051    1828 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
	I0516 22:41:55.954051    1828 cache.go:160] Loading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from local cache
	I0516 22:41:55.954114    1828 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	W0516 22:41:55.980457    1828 image.go:190] authn lookup for k8s.gcr.io/kube-apiserver:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0516 22:41:56.230876    1828 image.go:190] authn lookup for k8s.gcr.io/etcd:3.4.3-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0516 22:41:56.232011    1828 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.17.0
	I0516 22:41:56.453084    1828 cache.go:156] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.17.0 exists
	I0516 22:41:56.453290    1828 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.17.0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-controller-manager_v1.17.0" took 1.6598943s
	I0516 22:41:56.453290    1828 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.17.0 succeeded
	I0516 22:41:56.470328    1828 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.4.3-0
	W0516 22:41:56.478856    1828 image.go:190] authn lookup for k8s.gcr.io/coredns:1.6.5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0516 22:41:56.697895    1828 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns_1.6.5
	W0516 22:41:56.744932    1828 image.go:190] authn lookup for k8s.gcr.io/kube-scheduler:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0516 22:41:56.828948    1828 cache.go:156] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.17.0 exists
	I0516 22:41:56.829513    1828 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.17.0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-proxy_v1.17.0" took 2.0371197s
	I0516 22:41:56.829513    1828 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.17.0 succeeded
	I0516 22:41:56.992223    1828 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.17.0
	I0516 22:41:57.062503    1828 cache.go:156] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns_1.6.5 exists
	I0516 22:41:57.062503    1828 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.5" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\coredns_1.6.5" took 2.2701076s
	I0516 22:41:57.062503    1828 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns_1.6.5 succeeded
	I0516 22:41:57.083783    1828 cache.go:156] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.17.0 exists
	I0516 22:41:57.084885    1828 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.17.0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-apiserver_v1.17.0" took 2.2925755s
	I0516 22:41:57.084928    1828 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.17.0 succeeded
	I0516 22:41:57.575819    1828 cache.go:156] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.17.0 exists
	I0516 22:41:57.575819    1828 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.17.0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-scheduler_v1.17.0" took 2.7836214s
	I0516 22:41:57.575819    1828 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.17.0 succeeded
	I0516 22:41:57.795189    1828 cache.go:156] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.4.3-0 exists
	I0516 22:41:57.795367    1828 cache.go:96] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\etcd_3.4.3-0" took 3.003095s
	I0516 22:41:57.795418    1828 cache.go:80] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.4.3-0 succeeded
	I0516 22:41:57.795418    1828 cache.go:87] Successfully saved all images to host disk.
	I0516 22:41:58.316790    1828 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from cached tarball
	I0516 22:41:58.316959    1828 cache.go:206] Successfully downloaded all kic artifacts
	I0516 22:41:58.317148    1828 start.go:352] acquiring machines lock for test-preload-20220516224147-2444: {Name:mk0208bba8cfe7e07d4200c6c925bc7fd714e78d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:41:58.317148    1828 start.go:356] acquired machines lock for "test-preload-20220516224147-2444" in 0s
	I0516 22:41:58.317148    1828 start.go:91] Provisioning new machine with config: &{Name:test-preload-20220516224147-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220516224147-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0516 22:41:58.317676    1828 start.go:131] createHost starting for "" (driver="docker")
	I0516 22:41:58.320635    1828 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0516 22:41:58.321230    1828 start.go:165] libmachine.API.Create for "test-preload-20220516224147-2444" (driver="docker")
	I0516 22:41:58.321367    1828 client.go:168] LocalClient.Create starting
	I0516 22:41:58.321948    1828 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 22:41:58.321948    1828 main.go:134] libmachine: Decoding PEM data...
	I0516 22:41:58.321948    1828 main.go:134] libmachine: Parsing certificate...
	I0516 22:41:58.322592    1828 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 22:41:58.322592    1828 main.go:134] libmachine: Decoding PEM data...
	I0516 22:41:58.322592    1828 main.go:134] libmachine: Parsing certificate...
	I0516 22:41:58.332934    1828 cli_runner.go:164] Run: docker network inspect test-preload-20220516224147-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:41:59.433275    1828 cli_runner.go:211] docker network inspect test-preload-20220516224147-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:41:59.433345    1828 cli_runner.go:217] Completed: docker network inspect test-preload-20220516224147-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.100175s)
	I0516 22:41:59.442344    1828 network_create.go:272] running [docker network inspect test-preload-20220516224147-2444] to gather additional debugging logs...
	I0516 22:41:59.442344    1828 cli_runner.go:164] Run: docker network inspect test-preload-20220516224147-2444
	W0516 22:42:00.512280    1828 cli_runner.go:211] docker network inspect test-preload-20220516224147-2444 returned with exit code 1
	I0516 22:42:00.512280    1828 cli_runner.go:217] Completed: docker network inspect test-preload-20220516224147-2444: (1.0697927s)
	I0516 22:42:00.512280    1828 network_create.go:275] error running [docker network inspect test-preload-20220516224147-2444]: docker network inspect test-preload-20220516224147-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: test-preload-20220516224147-2444
	I0516 22:42:00.512280    1828 network_create.go:277] output of [docker network inspect test-preload-20220516224147-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: test-preload-20220516224147-2444
	
	** /stderr **
	I0516 22:42:00.519308    1828 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 22:42:01.556284    1828 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0365122s)
	I0516 22:42:01.575715    1828 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0003ac098] misses:0}
	I0516 22:42:01.576716    1828 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:42:01.576972    1828 network_create.go:115] attempt to create docker network test-preload-20220516224147-2444 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0516 22:42:01.585143    1828 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220516224147-2444
	W0516 22:42:02.655187    1828 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220516224147-2444 returned with exit code 1
	I0516 22:42:02.655221    1828 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220516224147-2444: (1.0698149s)
	W0516 22:42:02.655318    1828 network_create.go:107] failed to create docker network test-preload-20220516224147-2444 192.168.49.0/24, will retry: subnet is taken
	I0516 22:42:02.674968    1828 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003ac098] amended:false}} dirty:map[] misses:0}
	I0516 22:42:02.675823    1828 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:42:02.695110    1828 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003ac098] amended:true}} dirty:map[192.168.49.0:0xc0003ac098 192.168.58.0:0xc0014482b8] misses:0}
	I0516 22:42:02.696125    1828 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:42:02.696125    1828 network_create.go:115] attempt to create docker network test-preload-20220516224147-2444 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0516 22:42:02.702402    1828 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220516224147-2444
	W0516 22:42:03.717342    1828 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220516224147-2444 returned with exit code 1
	I0516 22:42:03.717342    1828 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220516224147-2444: (1.0147352s)
	W0516 22:42:03.717690    1828 network_create.go:107] failed to create docker network test-preload-20220516224147-2444 192.168.58.0/24, will retry: subnet is taken
	I0516 22:42:03.737066    1828 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003ac098] amended:true}} dirty:map[192.168.49.0:0xc0003ac098 192.168.58.0:0xc0014482b8] misses:1}
	I0516 22:42:03.737460    1828 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:42:03.754297    1828 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003ac098] amended:true}} dirty:map[192.168.49.0:0xc0003ac098 192.168.58.0:0xc0014482b8 192.168.67.0:0xc0003ac240] misses:1}
	I0516 22:42:03.754297    1828 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:42:03.754297    1828 network_create.go:115] attempt to create docker network test-preload-20220516224147-2444 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0516 22:42:03.761387    1828 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220516224147-2444
	W0516 22:42:04.793430    1828 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220516224147-2444 returned with exit code 1
	I0516 22:42:04.793621    1828 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220516224147-2444: (1.0320359s)
	W0516 22:42:04.793737    1828 network_create.go:107] failed to create docker network test-preload-20220516224147-2444 192.168.67.0/24, will retry: subnet is taken
	I0516 22:42:04.813797    1828 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003ac098] amended:true}} dirty:map[192.168.49.0:0xc0003ac098 192.168.58.0:0xc0014482b8 192.168.67.0:0xc0003ac240] misses:2}
	I0516 22:42:04.813797    1828 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:42:04.834303    1828 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003ac098] amended:true}} dirty:map[192.168.49.0:0xc0003ac098 192.168.58.0:0xc0014482b8 192.168.67.0:0xc0003ac240 192.168.76.0:0xc0006e20e8] misses:2}
	I0516 22:42:04.834303    1828 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:42:04.834303    1828 network_create.go:115] attempt to create docker network test-preload-20220516224147-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0516 22:42:04.843574    1828 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220516224147-2444
	W0516 22:42:05.850808    1828 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220516224147-2444 returned with exit code 1
	I0516 22:42:05.850985    1828 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220516224147-2444: (1.0071142s)
	E0516 22:42:05.851131    1828 network_create.go:104] error while trying to create docker network test-preload-20220516224147-2444 192.168.76.0/24: create docker network test-preload-20220516224147-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220516224147-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 83c3123cd88d629acc3222b6a487febcb8a17cb153a15bc24acc42334228ac07 (br-83c3123cd88d): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	W0516 22:42:05.851516    1828 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network test-preload-20220516224147-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220516224147-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 83c3123cd88d629acc3222b6a487febcb8a17cb153a15bc24acc42334228ac07 (br-83c3123cd88d): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	I0516 22:42:05.869489    1828 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 22:42:06.885483    1828 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0159026s)
	I0516 22:42:06.894208    1828 cli_runner.go:164] Run: docker volume create test-preload-20220516224147-2444 --label name.minikube.sigs.k8s.io=test-preload-20220516224147-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 22:42:07.950816    1828 cli_runner.go:211] docker volume create test-preload-20220516224147-2444 --label name.minikube.sigs.k8s.io=test-preload-20220516224147-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 22:42:07.950816    1828 cli_runner.go:217] Completed: docker volume create test-preload-20220516224147-2444 --label name.minikube.sigs.k8s.io=test-preload-20220516224147-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0566008s)
	I0516 22:42:07.950816    1828 client.go:171] LocalClient.Create took 9.6293769s
	I0516 22:42:09.973401    1828 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:42:09.980406    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444
	W0516 22:42:10.975674    1828 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444 returned with exit code 1
	I0516 22:42:10.975674    1828 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220516224147-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	I0516 22:42:11.264118    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444
	W0516 22:42:12.272892    1828 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444 returned with exit code 1
	I0516 22:42:12.272892    1828 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444: (1.0087668s)
	W0516 22:42:12.272892    1828 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220516224147-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	
	W0516 22:42:12.272892    1828 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220516224147-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	I0516 22:42:12.284749    1828 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:42:12.292288    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444
	W0516 22:42:13.321482    1828 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444 returned with exit code 1
	I0516 22:42:13.321482    1828 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444: (1.0291863s)
	I0516 22:42:13.321482    1828 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220516224147-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	I0516 22:42:13.622845    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444
	W0516 22:42:14.643506    1828 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444 returned with exit code 1
	I0516 22:42:14.643576    1828 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444: (1.0205583s)
	W0516 22:42:14.643576    1828 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220516224147-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	
	W0516 22:42:14.643576    1828 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220516224147-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	I0516 22:42:14.643576    1828 start.go:134] duration metric: createHost completed in 16.3257764s
	I0516 22:42:14.643576    1828 start.go:81] releasing machines lock for "test-preload-20220516224147-2444", held for 16.3263044s
	W0516 22:42:14.643576    1828 start.go:608] error starting host: creating host: create: creating: setting up container node: creating volume for test-preload-20220516224147-2444 container: docker volume create test-preload-20220516224147-2444 --label name.minikube.sigs.k8s.io=test-preload-20220516224147-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create test-preload-20220516224147-2444: error while creating volume root path '/var/lib/docker/volumes/test-preload-20220516224147-2444': mkdir /var/lib/docker/volumes/test-preload-20220516224147-2444: read-only file system
	I0516 22:42:14.667061    1828 cli_runner.go:164] Run: docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}
	W0516 22:42:15.671151    1828 cli_runner.go:211] docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:42:15.671151    1828 cli_runner.go:217] Completed: docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}: (1.0039697s)
	I0516 22:42:15.671151    1828 delete.go:82] Unable to get host status for test-preload-20220516224147-2444, assuming it has already been deleted: state: unknown state "test-preload-20220516224147-2444": docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	W0516 22:42:15.671151    1828 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for test-preload-20220516224147-2444 container: docker volume create test-preload-20220516224147-2444 --label name.minikube.sigs.k8s.io=test-preload-20220516224147-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create test-preload-20220516224147-2444: error while creating volume root path '/var/lib/docker/volumes/test-preload-20220516224147-2444': mkdir /var/lib/docker/volumes/test-preload-20220516224147-2444: read-only file system
	
	I0516 22:42:15.671151    1828 start.go:623] Will try again in 5 seconds ...
	I0516 22:42:20.678665    1828 start.go:352] acquiring machines lock for test-preload-20220516224147-2444: {Name:mk0208bba8cfe7e07d4200c6c925bc7fd714e78d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:42:20.679141    1828 start.go:356] acquired machines lock for "test-preload-20220516224147-2444" in 117.8µs
	I0516 22:42:20.679141    1828 start.go:94] Skipping create...Using existing machine configuration
	I0516 22:42:20.679141    1828 fix.go:55] fixHost starting: 
	I0516 22:42:20.695396    1828 cli_runner.go:164] Run: docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}
	W0516 22:42:21.717337    1828 cli_runner.go:211] docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:42:21.717504    1828 cli_runner.go:217] Completed: docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}: (1.0219332s)
	I0516 22:42:21.717504    1828 fix.go:103] recreateIfNeeded on test-preload-20220516224147-2444: state= err=unknown state "test-preload-20220516224147-2444": docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	I0516 22:42:21.717504    1828 fix.go:108] machineExists: false. err=machine does not exist
	I0516 22:42:21.738664    1828 out.go:177] * docker "test-preload-20220516224147-2444" container is missing, will recreate.
	I0516 22:42:21.742044    1828 delete.go:124] DEMOLISHING test-preload-20220516224147-2444 ...
	I0516 22:42:21.756613    1828 cli_runner.go:164] Run: docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}
	W0516 22:42:22.761669    1828 cli_runner.go:211] docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:42:22.761669    1828 cli_runner.go:217] Completed: docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}: (1.0050493s)
	W0516 22:42:22.761669    1828 stop.go:75] unable to get state: unknown state "test-preload-20220516224147-2444": docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	I0516 22:42:22.761669    1828 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "test-preload-20220516224147-2444": docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	I0516 22:42:22.779218    1828 cli_runner.go:164] Run: docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}
	W0516 22:42:23.792603    1828 cli_runner.go:211] docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:42:23.792603    1828 cli_runner.go:217] Completed: docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}: (1.0132258s)
	I0516 22:42:23.792603    1828 delete.go:82] Unable to get host status for test-preload-20220516224147-2444, assuming it has already been deleted: state: unknown state "test-preload-20220516224147-2444": docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	I0516 22:42:23.802827    1828 cli_runner.go:164] Run: docker container inspect -f {{.Id}} test-preload-20220516224147-2444
	W0516 22:42:24.829793    1828 cli_runner.go:211] docker container inspect -f {{.Id}} test-preload-20220516224147-2444 returned with exit code 1
	I0516 22:42:24.829793    1828 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} test-preload-20220516224147-2444: (1.0269577s)
	I0516 22:42:24.829793    1828 kic.go:356] could not find the container test-preload-20220516224147-2444 to remove it. will try anyways
	I0516 22:42:24.839534    1828 cli_runner.go:164] Run: docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}
	W0516 22:42:25.864684    1828 cli_runner.go:211] docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:42:25.864684    1828 cli_runner.go:217] Completed: docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}: (1.0250968s)
	W0516 22:42:25.864684    1828 oci.go:84] error getting container status, will try to delete anyways: unknown state "test-preload-20220516224147-2444": docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	I0516 22:42:25.876515    1828 cli_runner.go:164] Run: docker exec --privileged -t test-preload-20220516224147-2444 /bin/bash -c "sudo init 0"
	W0516 22:42:26.883347    1828 cli_runner.go:211] docker exec --privileged -t test-preload-20220516224147-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 22:42:26.883347    1828 cli_runner.go:217] Completed: docker exec --privileged -t test-preload-20220516224147-2444 /bin/bash -c "sudo init 0": (1.0066974s)
	I0516 22:42:26.883347    1828 oci.go:641] error shutdown test-preload-20220516224147-2444: docker exec --privileged -t test-preload-20220516224147-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	I0516 22:42:27.896878    1828 cli_runner.go:164] Run: docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}
	W0516 22:42:28.920899    1828 cli_runner.go:211] docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:42:28.920931    1828 cli_runner.go:217] Completed: docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}: (1.0239076s)
	I0516 22:42:28.921052    1828 oci.go:653] temporary error verifying shutdown: unknown state "test-preload-20220516224147-2444": docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	I0516 22:42:28.921081    1828 oci.go:655] temporary error: container test-preload-20220516224147-2444 status is  but expect it to be exited
	I0516 22:42:28.921173    1828 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "test-preload-20220516224147-2444": docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	I0516 22:42:29.396081    1828 cli_runner.go:164] Run: docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}
	W0516 22:42:30.426331    1828 cli_runner.go:211] docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:42:30.426441    1828 cli_runner.go:217] Completed: docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}: (1.0299921s)
	I0516 22:42:30.426531    1828 oci.go:653] temporary error verifying shutdown: unknown state "test-preload-20220516224147-2444": docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	I0516 22:42:30.426558    1828 oci.go:655] temporary error: container test-preload-20220516224147-2444 status is  but expect it to be exited
	I0516 22:42:30.426558    1828 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "test-preload-20220516224147-2444": docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	I0516 22:42:31.331703    1828 cli_runner.go:164] Run: docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}
	W0516 22:42:32.378014    1828 cli_runner.go:211] docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:42:32.378014    1828 cli_runner.go:217] Completed: docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}: (1.0463028s)
	I0516 22:42:32.378014    1828 oci.go:653] temporary error verifying shutdown: unknown state "test-preload-20220516224147-2444": docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	I0516 22:42:32.378014    1828 oci.go:655] temporary error: container test-preload-20220516224147-2444 status is  but expect it to be exited
	I0516 22:42:32.378014    1828 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "test-preload-20220516224147-2444": docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	I0516 22:42:33.041141    1828 cli_runner.go:164] Run: docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}
	W0516 22:42:34.059479    1828 cli_runner.go:211] docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:42:34.059610    1828 cli_runner.go:217] Completed: docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}: (1.0181683s)
	I0516 22:42:34.059610    1828 oci.go:653] temporary error verifying shutdown: unknown state "test-preload-20220516224147-2444": docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	I0516 22:42:34.059610    1828 oci.go:655] temporary error: container test-preload-20220516224147-2444 status is  but expect it to be exited
	I0516 22:42:34.059610    1828 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "test-preload-20220516224147-2444": docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	I0516 22:42:35.178598    1828 cli_runner.go:164] Run: docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}
	W0516 22:42:36.204892    1828 cli_runner.go:211] docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:42:36.204953    1828 cli_runner.go:217] Completed: docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}: (1.0261046s)
	I0516 22:42:36.205028    1828 oci.go:653] temporary error verifying shutdown: unknown state "test-preload-20220516224147-2444": docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	I0516 22:42:36.205028    1828 oci.go:655] temporary error: container test-preload-20220516224147-2444 status is  but expect it to be exited
	I0516 22:42:36.205028    1828 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "test-preload-20220516224147-2444": docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	I0516 22:42:37.738709    1828 cli_runner.go:164] Run: docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}
	W0516 22:42:38.774751    1828 cli_runner.go:211] docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:42:38.774974    1828 cli_runner.go:217] Completed: docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}: (1.0360338s)
	I0516 22:42:38.775151    1828 oci.go:653] temporary error verifying shutdown: unknown state "test-preload-20220516224147-2444": docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	I0516 22:42:38.775199    1828 oci.go:655] temporary error: container test-preload-20220516224147-2444 status is  but expect it to be exited
	I0516 22:42:38.775251    1828 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "test-preload-20220516224147-2444": docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	I0516 22:42:41.839266    1828 cli_runner.go:164] Run: docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}
	W0516 22:42:42.859382    1828 cli_runner.go:211] docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:42:42.859583    1828 cli_runner.go:217] Completed: docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}: (1.0201083s)
	I0516 22:42:42.859739    1828 oci.go:653] temporary error verifying shutdown: unknown state "test-preload-20220516224147-2444": docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	I0516 22:42:42.859818    1828 oci.go:655] temporary error: container test-preload-20220516224147-2444 status is  but expect it to be exited
	I0516 22:42:42.859847    1828 oci.go:88] couldn't shut down test-preload-20220516224147-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "test-preload-20220516224147-2444": docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	 
	I0516 22:42:42.868829    1828 cli_runner.go:164] Run: docker rm -f -v test-preload-20220516224147-2444
	I0516 22:42:43.891630    1828 cli_runner.go:217] Completed: docker rm -f -v test-preload-20220516224147-2444: (1.0226091s)
	I0516 22:42:43.899970    1828 cli_runner.go:164] Run: docker container inspect -f {{.Id}} test-preload-20220516224147-2444
	W0516 22:42:44.926072    1828 cli_runner.go:211] docker container inspect -f {{.Id}} test-preload-20220516224147-2444 returned with exit code 1
	I0516 22:42:44.926072    1828 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} test-preload-20220516224147-2444: (1.025785s)
	I0516 22:42:44.936165    1828 cli_runner.go:164] Run: docker network inspect test-preload-20220516224147-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:42:45.954175    1828 cli_runner.go:211] docker network inspect test-preload-20220516224147-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:42:45.954229    1828 cli_runner.go:217] Completed: docker network inspect test-preload-20220516224147-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0179125s)
	I0516 22:42:45.963327    1828 network_create.go:272] running [docker network inspect test-preload-20220516224147-2444] to gather additional debugging logs...
	I0516 22:42:45.963327    1828 cli_runner.go:164] Run: docker network inspect test-preload-20220516224147-2444
	W0516 22:42:46.986053    1828 cli_runner.go:211] docker network inspect test-preload-20220516224147-2444 returned with exit code 1
	I0516 22:42:46.986053    1828 cli_runner.go:217] Completed: docker network inspect test-preload-20220516224147-2444: (1.0227184s)
	I0516 22:42:46.986053    1828 network_create.go:275] error running [docker network inspect test-preload-20220516224147-2444]: docker network inspect test-preload-20220516224147-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: test-preload-20220516224147-2444
	I0516 22:42:46.986053    1828 network_create.go:277] output of [docker network inspect test-preload-20220516224147-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: test-preload-20220516224147-2444
	
	** /stderr **
	W0516 22:42:46.987330    1828 delete.go:139] delete failed (probably ok) <nil>
	I0516 22:42:46.987330    1828 fix.go:115] Sleeping 1 second for extra luck!
	I0516 22:42:47.992191    1828 start.go:131] createHost starting for "" (driver="docker")
	I0516 22:42:47.995446    1828 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0516 22:42:47.996222    1828 start.go:165] libmachine.API.Create for "test-preload-20220516224147-2444" (driver="docker")
	I0516 22:42:47.996258    1828 client.go:168] LocalClient.Create starting
	I0516 22:42:47.996436    1828 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 22:42:47.996436    1828 main.go:134] libmachine: Decoding PEM data...
	I0516 22:42:47.996436    1828 main.go:134] libmachine: Parsing certificate...
	I0516 22:42:47.997253    1828 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 22:42:47.997278    1828 main.go:134] libmachine: Decoding PEM data...
	I0516 22:42:47.997278    1828 main.go:134] libmachine: Parsing certificate...
	I0516 22:42:48.007740    1828 cli_runner.go:164] Run: docker network inspect test-preload-20220516224147-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:42:49.036552    1828 cli_runner.go:211] docker network inspect test-preload-20220516224147-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:42:49.036552    1828 cli_runner.go:217] Completed: docker network inspect test-preload-20220516224147-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0288046s)
	I0516 22:42:49.046444    1828 network_create.go:272] running [docker network inspect test-preload-20220516224147-2444] to gather additional debugging logs...
	I0516 22:42:49.046444    1828 cli_runner.go:164] Run: docker network inspect test-preload-20220516224147-2444
	W0516 22:42:50.074613    1828 cli_runner.go:211] docker network inspect test-preload-20220516224147-2444 returned with exit code 1
	I0516 22:42:50.074613    1828 cli_runner.go:217] Completed: docker network inspect test-preload-20220516224147-2444: (1.0281612s)
	I0516 22:42:50.074613    1828 network_create.go:275] error running [docker network inspect test-preload-20220516224147-2444]: docker network inspect test-preload-20220516224147-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: test-preload-20220516224147-2444
	I0516 22:42:50.074613    1828 network_create.go:277] output of [docker network inspect test-preload-20220516224147-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: test-preload-20220516224147-2444
	
	** /stderr **
	I0516 22:42:50.084948    1828 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 22:42:51.112324    1828 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0273681s)
	I0516 22:42:51.128964    1828 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003ac098] amended:true}} dirty:map[192.168.49.0:0xc0003ac098 192.168.58.0:0xc0014482b8 192.168.67.0:0xc0003ac240 192.168.76.0:0xc0006e20e8] misses:2}
	I0516 22:42:51.128964    1828 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:42:51.142964    1828 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003ac098] amended:true}} dirty:map[192.168.49.0:0xc0003ac098 192.168.58.0:0xc0014482b8 192.168.67.0:0xc0003ac240 192.168.76.0:0xc0006e20e8] misses:3}
	I0516 22:42:51.142964    1828 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:42:51.156934    1828 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003ac098 192.168.58.0:0xc0014482b8 192.168.67.0:0xc0003ac240 192.168.76.0:0xc0006e20e8] amended:false}} dirty:map[] misses:0}
	I0516 22:42:51.157395    1828 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:42:51.175269    1828 network.go:279] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003ac098 192.168.58.0:0xc0014482b8 192.168.67.0:0xc0003ac240 192.168.76.0:0xc0006e20e8] amended:false}} dirty:map[] misses:0}
	I0516 22:42:51.175425    1828 network.go:238] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:42:51.189339    1828 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003ac098 192.168.58.0:0xc0014482b8 192.168.67.0:0xc0003ac240 192.168.76.0:0xc0006e20e8] amended:true}} dirty:map[192.168.49.0:0xc0003ac098 192.168.58.0:0xc0014482b8 192.168.67.0:0xc0003ac240 192.168.76.0:0xc0006e20e8 192.168.85.0:0xc001448600] misses:0}
	I0516 22:42:51.189339    1828 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:42:51.189339    1828 network_create.go:115] attempt to create docker network test-preload-20220516224147-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0516 22:42:51.198334    1828 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220516224147-2444
	W0516 22:42:52.220061    1828 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220516224147-2444 returned with exit code 1
	I0516 22:42:52.220141    1828 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220516224147-2444: (1.021482s)
	E0516 22:42:52.220141    1828 network_create.go:104] error while trying to create docker network test-preload-20220516224147-2444 192.168.85.0/24: create docker network test-preload-20220516224147-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220516224147-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network a7b6bff31ab014f8d3d8bfbf56656b6a666f8c9f40a0b59f5f7a1b87e727fde8 (br-a7b6bff31ab0): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	W0516 22:42:52.220141    1828 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network test-preload-20220516224147-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220516224147-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network a7b6bff31ab014f8d3d8bfbf56656b6a666f8c9f40a0b59f5f7a1b87e727fde8 (br-a7b6bff31ab0): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network test-preload-20220516224147-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220516224147-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network a7b6bff31ab014f8d3d8bfbf56656b6a666f8c9f40a0b59f5f7a1b87e727fde8 (br-a7b6bff31ab0): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	I0516 22:42:52.235276    1828 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 22:42:53.265861    1828 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0304527s)
	I0516 22:42:53.274619    1828 cli_runner.go:164] Run: docker volume create test-preload-20220516224147-2444 --label name.minikube.sigs.k8s.io=test-preload-20220516224147-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 22:42:54.315484    1828 cli_runner.go:211] docker volume create test-preload-20220516224147-2444 --label name.minikube.sigs.k8s.io=test-preload-20220516224147-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 22:42:54.315625    1828 cli_runner.go:217] Completed: docker volume create test-preload-20220516224147-2444 --label name.minikube.sigs.k8s.io=test-preload-20220516224147-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0407442s)
	I0516 22:42:54.315718    1828 client.go:171] LocalClient.Create took 6.319364s
	I0516 22:42:56.331561    1828 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:42:56.337653    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444
	W0516 22:42:57.359710    1828 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444 returned with exit code 1
	I0516 22:42:57.359710    1828 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444: (1.0220496s)
	I0516 22:42:57.359710    1828 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220516224147-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	I0516 22:42:57.709156    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444
	W0516 22:42:58.760803    1828 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444 returned with exit code 1
	I0516 22:42:58.760803    1828 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444: (1.0516386s)
	W0516 22:42:58.760803    1828 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220516224147-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	
	W0516 22:42:58.760803    1828 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220516224147-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	I0516 22:42:58.770054    1828 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:42:58.779263    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444
	W0516 22:42:59.816298    1828 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444 returned with exit code 1
	I0516 22:42:59.816385    1828 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444: (1.0369947s)
	I0516 22:42:59.816613    1828 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220516224147-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	I0516 22:43:00.054891    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444
	W0516 22:43:01.081821    1828 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444 returned with exit code 1
	I0516 22:43:01.081821    1828 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444: (1.0269218s)
	W0516 22:43:01.081821    1828 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220516224147-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	
	W0516 22:43:01.081821    1828 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220516224147-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	I0516 22:43:01.081821    1828 start.go:134] duration metric: createHost completed in 13.0895313s
	I0516 22:43:01.093423    1828 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:43:01.100435    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444
	W0516 22:43:02.115499    1828 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444 returned with exit code 1
	I0516 22:43:02.115571    1828 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444: (1.014924s)
	I0516 22:43:02.115742    1828 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220516224147-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	I0516 22:43:02.378303    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444
	W0516 22:43:03.414628    1828 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444 returned with exit code 1
	I0516 22:43:03.414628    1828 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444: (1.0352974s)
	W0516 22:43:03.414628    1828 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220516224147-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	
	W0516 22:43:03.414628    1828 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220516224147-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	I0516 22:43:03.426050    1828 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:43:03.433136    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444
	W0516 22:43:04.524484    1828 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444 returned with exit code 1
	I0516 22:43:04.524484    1828 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444: (1.0911494s)
	I0516 22:43:04.524484    1828 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220516224147-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	I0516 22:43:04.736605    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444
	W0516 22:43:05.774159    1828 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444 returned with exit code 1
	I0516 22:43:05.774159    1828 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444: (1.0375467s)
	W0516 22:43:05.774159    1828 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220516224147-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	
	W0516 22:43:05.774159    1828 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220516224147-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220516224147-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444
	I0516 22:43:05.774159    1828 fix.go:57] fixHost completed within 45.0946802s
	I0516 22:43:05.774159    1828 start.go:81] releasing machines lock for "test-preload-20220516224147-2444", held for 45.0946802s
	W0516 22:43:05.775094    1828 out.go:239] * Failed to start docker container. Running "minikube delete -p test-preload-20220516224147-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for test-preload-20220516224147-2444 container: docker volume create test-preload-20220516224147-2444 --label name.minikube.sigs.k8s.io=test-preload-20220516224147-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create test-preload-20220516224147-2444: error while creating volume root path '/var/lib/docker/volumes/test-preload-20220516224147-2444': mkdir /var/lib/docker/volumes/test-preload-20220516224147-2444: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p test-preload-20220516224147-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for test-preload-20220516224147-2444 container: docker volume create test-preload-20220516224147-2444 --label name.minikube.sigs.k8s.io=test-preload-20220516224147-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create test-preload-20220516224147-2444: error while creating volume root path '/var/lib/docker/volumes/test-preload-20220516224147-2444': mkdir /var/lib/docker/volumes/test-preload-20220516224147-2444: read-only file system
	
	I0516 22:43:05.780159    1828 out.go:177] 
	W0516 22:43:05.782720    1828 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for test-preload-20220516224147-2444 container: docker volume create test-preload-20220516224147-2444 --label name.minikube.sigs.k8s.io=test-preload-20220516224147-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create test-preload-20220516224147-2444: error while creating volume root path '/var/lib/docker/volumes/test-preload-20220516224147-2444': mkdir /var/lib/docker/volumes/test-preload-20220516224147-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for test-preload-20220516224147-2444 container: docker volume create test-preload-20220516224147-2444 --label name.minikube.sigs.k8s.io=test-preload-20220516224147-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create test-preload-20220516224147-2444: error while creating volume root path '/var/lib/docker/volumes/test-preload-20220516224147-2444': mkdir /var/lib/docker/volumes/test-preload-20220516224147-2444: read-only file system
	
	W0516 22:43:05.782872    1828 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0516 22:43:05.782912    1828 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0516 22:43:05.786382    1828 out.go:177] 

** /stderr **
preload_test.go:50: out/minikube-windows-amd64.exe start -p test-preload-20220516224147-2444 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.17.0 failed: exit status 60
panic.go:482: *** TestPreload FAILED at 2022-05-16 22:43:05.9628109 +0000 GMT m=+2853.635401001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect test-preload-20220516224147-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect test-preload-20220516224147-2444: exit status 1 (1.1271536s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: test-preload-20220516224147-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p test-preload-20220516224147-2444 -n test-preload-20220516224147-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p test-preload-20220516224147-2444 -n test-preload-20220516224147-2444: exit status 7 (2.7733515s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:43:09.841610    8268 status.go:247] status error: host: state: unknown state "test-preload-20220516224147-2444": docker container inspect test-preload-20220516224147-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220516224147-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-20220516224147-2444" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "test-preload-20220516224147-2444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-20220516224147-2444
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-20220516224147-2444: (8.0157608s)
--- FAIL: TestPreload (90.12s)
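Every start in this report fails the same way: `docker volume create` is refused with `read-only file system` and minikube exits with `PR_DOCKER_READONLY_VOL`. As a sketch only (this helper is hypothetical and not part of minikube or its test harness), the recurring signatures in this log can be mapped to a short diagnosis when triaging a run like this one:

```python
import re

# Hypothetical triage helper: the patterns below are copied verbatim from
# the log output above; the diagnoses paraphrase minikube's own suggestions.
SIGNATURES = {
    r"PR_DOCKER_READONLY_VOL": "Docker's /var/lib/docker/volumes is read-only; restart Docker Desktop",
    r"networks have overlapping IPv4": "bridge network subnet collision; stale minikube networks may need pruning",
    r"read-only file system": "volume root could not be created; Docker VM filesystem is read-only",
}

def triage(log_text: str) -> list[str]:
    """Return the diagnoses whose signature appears in the log text."""
    return [diag for pat, diag in SIGNATURES.items() if re.search(pat, log_text)]

line = ("X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: "
        "recreate: creating host")
print(triage(line))  # one hit: the read-only volume-root diagnosis
```

Since all 150 failures share these two signatures, a pass like this over the full report quickly confirms there is one underlying Docker Desktop fault rather than 150 distinct regressions.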

TestScheduledStopWindows (89.38s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-20220516224317-2444 --memory=2048 --driver=docker
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p scheduled-stop-20220516224317-2444 --memory=2048 --driver=docker: exit status 60 (1m17.5466567s)

-- stdout --
	* [scheduled-stop-20220516224317-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node scheduled-stop-20220516224317-2444 in cluster scheduled-stop-20220516224317-2444
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "scheduled-stop-20220516224317-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	E0516 22:43:35.628426    4472 network_create.go:104] error while trying to create docker network scheduled-stop-20220516224317-2444 192.168.76.0/24: create docker network scheduled-stop-20220516224317-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true scheduled-stop-20220516224317-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e212edf728962fc023120f76c5ef9f3a18411ba1211d2232eb53a50366de2db0 (br-e212edf72896): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network scheduled-stop-20220516224317-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true scheduled-stop-20220516224317-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e212edf728962fc023120f76c5ef9f3a18411ba1211d2232eb53a50366de2db0 (br-e212edf72896): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for scheduled-stop-20220516224317-2444 container: docker volume create scheduled-stop-20220516224317-2444 --label name.minikube.sigs.k8s.io=scheduled-stop-20220516224317-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create scheduled-stop-20220516224317-2444: error while creating volume root path '/var/lib/docker/volumes/scheduled-stop-20220516224317-2444': mkdir /var/lib/docker/volumes/scheduled-stop-20220516224317-2444: read-only file system
	
	E0516 22:44:21.970235    4472 network_create.go:104] error while trying to create docker network scheduled-stop-20220516224317-2444 192.168.85.0/24: create docker network scheduled-stop-20220516224317-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true scheduled-stop-20220516224317-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 8508b24f8355ab6deaaa42ae28ebd365cd939e3402c38e4a0af314da0a408ce0 (br-8508b24f8355): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network scheduled-stop-20220516224317-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true scheduled-stop-20220516224317-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 8508b24f8355ab6deaaa42ae28ebd365cd939e3402c38e4a0af314da0a408ce0 (br-8508b24f8355): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p scheduled-stop-20220516224317-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for scheduled-stop-20220516224317-2444 container: docker volume create scheduled-stop-20220516224317-2444 --label name.minikube.sigs.k8s.io=scheduled-stop-20220516224317-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create scheduled-stop-20220516224317-2444: error while creating volume root path '/var/lib/docker/volumes/scheduled-stop-20220516224317-2444': mkdir /var/lib/docker/volumes/scheduled-stop-20220516224317-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for scheduled-stop-20220516224317-2444 container: docker volume create scheduled-stop-20220516224317-2444 --label name.minikube.sigs.k8s.io=scheduled-stop-20220516224317-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create scheduled-stop-20220516224317-2444: error while creating volume root path '/var/lib/docker/volumes/scheduled-stop-20220516224317-2444': mkdir /var/lib/docker/volumes/scheduled-stop-20220516224317-2444: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 60

-- stdout --
	* [scheduled-stop-20220516224317-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node scheduled-stop-20220516224317-2444 in cluster scheduled-stop-20220516224317-2444
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "scheduled-stop-20220516224317-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	E0516 22:43:35.628426    4472 network_create.go:104] error while trying to create docker network scheduled-stop-20220516224317-2444 192.168.76.0/24: create docker network scheduled-stop-20220516224317-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true scheduled-stop-20220516224317-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e212edf728962fc023120f76c5ef9f3a18411ba1211d2232eb53a50366de2db0 (br-e212edf72896): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network scheduled-stop-20220516224317-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true scheduled-stop-20220516224317-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e212edf728962fc023120f76c5ef9f3a18411ba1211d2232eb53a50366de2db0 (br-e212edf72896): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for scheduled-stop-20220516224317-2444 container: docker volume create scheduled-stop-20220516224317-2444 --label name.minikube.sigs.k8s.io=scheduled-stop-20220516224317-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create scheduled-stop-20220516224317-2444: error while creating volume root path '/var/lib/docker/volumes/scheduled-stop-20220516224317-2444': mkdir /var/lib/docker/volumes/scheduled-stop-20220516224317-2444: read-only file system
	
	E0516 22:44:21.970235    4472 network_create.go:104] error while trying to create docker network scheduled-stop-20220516224317-2444 192.168.85.0/24: create docker network scheduled-stop-20220516224317-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true scheduled-stop-20220516224317-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 8508b24f8355ab6deaaa42ae28ebd365cd939e3402c38e4a0af314da0a408ce0 (br-8508b24f8355): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network scheduled-stop-20220516224317-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true scheduled-stop-20220516224317-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 8508b24f8355ab6deaaa42ae28ebd365cd939e3402c38e4a0af314da0a408ce0 (br-8508b24f8355): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p scheduled-stop-20220516224317-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for scheduled-stop-20220516224317-2444 container: docker volume create scheduled-stop-20220516224317-2444 --label name.minikube.sigs.k8s.io=scheduled-stop-20220516224317-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create scheduled-stop-20220516224317-2444: error while creating volume root path '/var/lib/docker/volumes/scheduled-stop-20220516224317-2444': mkdir /var/lib/docker/volumes/scheduled-stop-20220516224317-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for scheduled-stop-20220516224317-2444 container: docker volume create scheduled-stop-20220516224317-2444 --label name.minikube.sigs.k8s.io=scheduled-stop-20220516224317-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create scheduled-stop-20220516224317-2444: error while creating volume root path '/var/lib/docker/volumes/scheduled-stop-20220516224317-2444': mkdir /var/lib/docker/volumes/scheduled-stop-20220516224317-2444: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
panic.go:482: *** TestScheduledStopWindows FAILED at 2022-05-16 22:44:35.4374099 +0000 GMT m=+2943.109322001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopWindows]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-20220516224317-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect scheduled-stop-20220516224317-2444: exit status 1 (1.1000187s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: scheduled-stop-20220516224317-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-20220516224317-2444 -n scheduled-stop-20220516224317-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-20220516224317-2444 -n scheduled-stop-20220516224317-2444: exit status 7 (2.7894093s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:44:39.305963    8212 status.go:247] status error: host: state: unknown state "scheduled-stop-20220516224317-2444": docker container inspect scheduled-stop-20220516224317-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: scheduled-stop-20220516224317-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-20220516224317-2444" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "scheduled-stop-20220516224317-2444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-20220516224317-2444
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-20220516224317-2444: (7.9298243s)
--- FAIL: TestScheduledStopWindows (89.38s)
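The `network_create.go:104` errors above show Docker rejecting both candidate subnets (192.168.76.0/24, then 192.168.85.0/24) because each overlaps an existing bridge network. The check Docker applies can be sketched with Python's `ipaddress` module; the in-use subnets below are assumptions for illustration, since the log never prints what `br-301630a99a7e` and `br-ea4bbeff936d` actually span:

```python
import ipaddress

# Assumed in-use bridge subnets (hypothetical: chosen so that both of
# minikube's attempts above, .76 and .85, collide the way the log shows).
in_use = [ipaddress.ip_network("192.168.76.0/24"),
          ipaddress.ip_network("192.168.85.0/24")]

def first_free_subnet(candidates, in_use):
    """Return the first candidate network that overlaps nothing in use."""
    for cidr in candidates:
        net = ipaddress.ip_network(cidr)
        if not any(net.overlaps(used) for used in in_use):
            return net
    return None  # every candidate collided, as happened in this run

# minikube walks 192.168.x.0/24 candidates in a loosely similar fashion:
candidates = [f"192.168.{third}.0/24" for third in (49, 58, 67, 76, 85, 94)]
print(first_free_subnet(candidates, in_use))  # → 192.168.49.0/24
```

When the collisions come from stale networks left by earlier failed runs, the cleanup minikube itself suggests (`minikube delete -p <profile>`, or pruning unused Docker networks) usually frees the subnets again.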

TestSkaffold (90.82s)

=== RUN   TestSkaffold
skaffold_test.go:56: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\skaffold.exe2388911623 version
skaffold_test.go:60: skaffold version: v1.38.0
skaffold_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p skaffold-20220516224447-2444 --memory=2600 --driver=docker
skaffold_test.go:63: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p skaffold-20220516224447-2444 --memory=2600 --driver=docker: exit status 60 (1m18.0052087s)

-- stdout --
	* [skaffold-20220516224447-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node skaffold-20220516224447-2444 in cluster skaffold-20220516224447-2444
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...
	* docker "skaffold-20220516224447-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2600MB) ...
	
	

-- /stdout --
** stderr ** 
	E0516 22:45:06.311027    3360 network_create.go:104] error while trying to create docker network skaffold-20220516224447-2444 192.168.76.0/24: create docker network skaffold-20220516224447-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true skaffold-20220516224447-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 8ecd2b62ee1709c32bbb126454f123d546db38ae09430077c8ae0b1329728ba4 (br-8ecd2b62ee17): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network skaffold-20220516224447-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true skaffold-20220516224447-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 8ecd2b62ee1709c32bbb126454f123d546db38ae09430077c8ae0b1329728ba4 (br-8ecd2b62ee17): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for skaffold-20220516224447-2444 container: docker volume create skaffold-20220516224447-2444 --label name.minikube.sigs.k8s.io=skaffold-20220516224447-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create skaffold-20220516224447-2444: error while creating volume root path '/var/lib/docker/volumes/skaffold-20220516224447-2444': mkdir /var/lib/docker/volumes/skaffold-20220516224447-2444: read-only file system
	
	E0516 22:45:52.666754    3360 network_create.go:104] error while trying to create docker network skaffold-20220516224447-2444 192.168.85.0/24: create docker network skaffold-20220516224447-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true skaffold-20220516224447-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network cdbb6f1ed50d8a443f51704fbc9df3f9ab51613010cc9779fe4a5fb78ffc37c4 (br-cdbb6f1ed50d): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network skaffold-20220516224447-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true skaffold-20220516224447-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network cdbb6f1ed50d8a443f51704fbc9df3f9ab51613010cc9779fe4a5fb78ffc37c4 (br-cdbb6f1ed50d): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p skaffold-20220516224447-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for skaffold-20220516224447-2444 container: docker volume create skaffold-20220516224447-2444 --label name.minikube.sigs.k8s.io=skaffold-20220516224447-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create skaffold-20220516224447-2444: error while creating volume root path '/var/lib/docker/volumes/skaffold-20220516224447-2444': mkdir /var/lib/docker/volumes/skaffold-20220516224447-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for skaffold-20220516224447-2444 container: docker volume create skaffold-20220516224447-2444 --label name.minikube.sigs.k8s.io=skaffold-20220516224447-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create skaffold-20220516224447-2444: error while creating volume root path '/var/lib/docker/volumes/skaffold-20220516224447-2444': mkdir /var/lib/docker/volumes/skaffold-20220516224447-2444: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
skaffold_test.go:65: starting minikube: exit status 60

-- stdout --
	* [skaffold-20220516224447-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node skaffold-20220516224447-2444 in cluster skaffold-20220516224447-2444
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...
	* docker "skaffold-20220516224447-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2600MB) ...
	
	

-- /stdout --
** stderr ** 
	E0516 22:45:06.311027    3360 network_create.go:104] error while trying to create docker network skaffold-20220516224447-2444 192.168.76.0/24: create docker network skaffold-20220516224447-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true skaffold-20220516224447-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 8ecd2b62ee1709c32bbb126454f123d546db38ae09430077c8ae0b1329728ba4 (br-8ecd2b62ee17): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network skaffold-20220516224447-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true skaffold-20220516224447-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 8ecd2b62ee1709c32bbb126454f123d546db38ae09430077c8ae0b1329728ba4 (br-8ecd2b62ee17): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for skaffold-20220516224447-2444 container: docker volume create skaffold-20220516224447-2444 --label name.minikube.sigs.k8s.io=skaffold-20220516224447-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create skaffold-20220516224447-2444: error while creating volume root path '/var/lib/docker/volumes/skaffold-20220516224447-2444': mkdir /var/lib/docker/volumes/skaffold-20220516224447-2444: read-only file system
	
	E0516 22:45:52.666754    3360 network_create.go:104] error while trying to create docker network skaffold-20220516224447-2444 192.168.85.0/24: create docker network skaffold-20220516224447-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true skaffold-20220516224447-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network cdbb6f1ed50d8a443f51704fbc9df3f9ab51613010cc9779fe4a5fb78ffc37c4 (br-cdbb6f1ed50d): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network skaffold-20220516224447-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true skaffold-20220516224447-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network cdbb6f1ed50d8a443f51704fbc9df3f9ab51613010cc9779fe4a5fb78ffc37c4 (br-cdbb6f1ed50d): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p skaffold-20220516224447-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for skaffold-20220516224447-2444 container: docker volume create skaffold-20220516224447-2444 --label name.minikube.sigs.k8s.io=skaffold-20220516224447-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create skaffold-20220516224447-2444: error while creating volume root path '/var/lib/docker/volumes/skaffold-20220516224447-2444': mkdir /var/lib/docker/volumes/skaffold-20220516224447-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for skaffold-20220516224447-2444 container: docker volume create skaffold-20220516224447-2444 --label name.minikube.sigs.k8s.io=skaffold-20220516224447-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create skaffold-20220516224447-2444: error while creating volume root path '/var/lib/docker/volumes/skaffold-20220516224447-2444': mkdir /var/lib/docker/volumes/skaffold-20220516224447-2444: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
panic.go:482: *** TestSkaffold FAILED at 2022-05-16 22:46:06.2277776 +0000 GMT m=+3033.898994901
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect skaffold-20220516224447-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect skaffold-20220516224447-2444: exit status 1 (1.0808018s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: skaffold-20220516224447-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p skaffold-20220516224447-2444 -n skaffold-20220516224447-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p skaffold-20220516224447-2444 -n skaffold-20220516224447-2444: exit status 7 (2.8178771s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:46:10.104309    2788 status.go:247] status error: host: state: unknown state "skaffold-20220516224447-2444": docker container inspect skaffold-20220516224447-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: skaffold-20220516224447-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-20220516224447-2444" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "skaffold-20220516224447-2444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p skaffold-20220516224447-2444
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p skaffold-20220516224447-2444: (7.9562228s)
--- FAIL: TestSkaffold (90.82s)
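The TestSkaffold run above fails on two distinct daemon errors: a subnet collision ("networks have overlapping IPv4") when creating the dedicated bridge network, and a read-only `/var/lib/docker` when creating the volume. The overlap check Docker performs can be sketched with Python's stdlib `ipaddress`; note the `existing` CIDR below is hypothetical, since the log names only the conflicting bridge (`br-ea4bbeff936d`), not its subnet.

```python
import ipaddress

# Subnet minikube asked Docker to create for the profile (from the log above).
requested = ipaddress.ip_network("192.168.85.0/24")

# Hypothetical CIDR for the pre-existing bridge network br-ea4bbeff936d;
# the report does not show its actual subnet.
existing = ipaddress.ip_network("192.168.84.0/23")

# Docker rejects the create when the two ranges share any addresses.
print(requested.overlaps(existing))                                  # True
print(requested.overlaps(ipaddress.ip_network("192.168.86.0/24")))   # False
```

In a run like this, the colliding networks are typically leftovers from earlier profiles on the same daemon; `docker network ls` and `docker network prune` before retrying is the usual cleanup.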

TestInsufficientStorage (32.55s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-20220516224618-2444 --memory=2048 --output=json --wait=true --driver=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-20220516224618-2444 --memory=2048 --output=json --wait=true --driver=docker: exit status 26 (21.8228929s)

-- stdout --
	{"specversion":"1.0","id":"be28e1a1-2300-4dcf-9c8c-4e0c61bc1b26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220516224618-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c9e40af5-b391-43e6-9e60-28c5ea82326c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"3264586b-f778-429d-9bc7-9c712bbe586a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"4fc76bd3-e51a-440e-a596-cd0707bb6252","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=12739"}}
	{"specversion":"1.0","id":"6b9de21c-6c5a-4f56-b2b1-372e61ad3c12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c1c50cf7-0eba-4d37-b743-356051bfa7f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"b626568d-0fde-458f-a5bf-2af76891d0cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"83f11ab8-785a-49ed-bc11-07e1838449de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"792e96b3-5816-4cd2-a29a-6e73d6eda6df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with the root privilege"}}
	{"specversion":"1.0","id":"a063762b-ff99-43d7-b561-b512a6b4cddc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220516224618-2444 in cluster insufficient-storage-20220516224618-2444","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"fc73156c-f0a7-46e7-859b-e71be03f5f4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"52d13ffb-c2bd-44c3-a9e9-49061949175a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"ae12af31-0b10-4ca3-a097-6ea13ca23156","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network insufficient-storage-20220516224618-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true insufficient-storage-20220516224618-2444: exit status 1\nstdout:\n\nstderr:\nError response from daemon: cannot create network 90a78892fddcca101904795b64d29505f4c3e9a0b510588d0c8663399e3706f2 (br-90a78892fddc): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4"}}
	{"specversion":"1.0","id":"9025ee25-f58e-4fe2-bd9b-081f393abb94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
** stderr ** 
	E0516 22:46:35.690908    8332 network_create.go:104] error while trying to create docker network insufficient-storage-20220516224618-2444 192.168.76.0/24: create docker network insufficient-storage-20220516224618-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true insufficient-storage-20220516224618-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 90a78892fddcca101904795b64d29505f4c3e9a0b510588d0c8663399e3706f2 (br-90a78892fddc): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-20220516224618-2444 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-20220516224618-2444 --output=json --layout=cluster: exit status 7 (2.7480546s)

-- stdout --
	{"Name":"insufficient-storage-20220516224618-2444","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.26.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":520,"StatusName":"Unknown"}},"Nodes":[{"Name":"insufficient-storage-20220516224618-2444","StatusCode":520,"StatusName":"Unknown","Components":{"apiserver":{"Name":"apiserver","StatusCode":520,"StatusName":"Unknown"},"kubelet":{"Name":"kubelet","StatusCode":520,"StatusName":"Unknown"}}}]}

-- /stdout --
** stderr ** 
	E0516 22:46:42.635303    8124 status.go:258] status error: host: state: unknown state "insufficient-storage-20220516224618-2444": docker container inspect insufficient-storage-20220516224618-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: insufficient-storage-20220516224618-2444
	E0516 22:46:42.635303    8124 status.go:261] The "insufficient-storage-20220516224618-2444" host does not exist!

** /stderr **
status_test.go:98: incorrect node status code: 507
helpers_test.go:175: Cleaning up "insufficient-storage-20220516224618-2444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-20220516224618-2444
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-20220516224618-2444: (7.9721969s)
--- FAIL: TestInsufficientStorage (32.55s)
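With `--output=json`, minikube emits one CloudEvents-style JSON object per line, and `TestInsufficientStorage` asserts on the `data` payload of events like the `RSRC_DOCKER_STORAGE` error above. A minimal sketch of extracting the exit code and reason from such a line (the event below is abbreviated from the one in the log; it is not the full payload):

```python
import json

# Abbreviated io.k8s.sigs.minikube.error event, modeled on the log above.
line = json.dumps({
    "specversion": "1.0",
    "type": "io.k8s.sigs.minikube.error",
    "data": {"exitcode": "26", "name": "RSRC_DOCKER_STORAGE",
             "message": "Docker is out of disk space!"},
})

event = json.loads(line)
if event["type"] == "io.k8s.sigs.minikube.error":
    # Note: exitcode is serialized as a string in these events.
    code = int(event["data"]["exitcode"])
    reason = event["data"]["name"]
```

Here the start command did exit 26 as the storage test expects; the failure is the later status assertion (`incorrect node status code: 507`), because the container was never created after the network-create error.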

TestRunningBinaryUpgrade (373.13s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.3615456764.exe start -p running-upgrade-20220516224826-2444 --memory=2200 --vm-driver=docker

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Non-zero exit: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.3615456764.exe start -p running-upgrade-20220516224826-2444 --memory=2200 --vm-driver=docker: exit status 70 (2m7.7767038s)

-- stdout --
	* [running-upgrade-20220516224826-2444] minikube v1.9.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=C:\Users\jenkins.minikube2\AppData\Local\Temp\legacy_kubeconfig3876561460
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	! StartHost failed, but will try again: creating host: create: creating: create kic node: creating volume for running-upgrade-20220516224826-2444 container: output Error response from daemon: create running-upgrade-20220516224826-2444: error while creating volume root path '/var/lib/docker/volumes/running-upgrade-20220516224826-2444': mkdir /var/lib/docker/volumes/running-upgrade-20220516224826-2444: read-only file system
	: exit status 1
	* docker "running-upgrade-20220516224826-2444" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220516224826-2444 container: output Error response from daemon: create running-upgrade-20220516224826-2444: error while creating volume root path '/var/lib/docker/volumes/running-upgrade-20220516224826-2444': mkdir /var/lib/docker/volumes/running-upgrade-20220516224826-2444: read-only file system
	: exit status 1
	  - Run: "minikube delete -p running-upgrade-20220516224826-2444", then "minikube start -p running-upgrade-20220516224826-2444 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220516224826-2444 container: output Error response from daemon: create running-upgrade-20220516224826-2444: error while creating volume root path '/var/lib/docker/volumes/running-upgrade-20220516224826-2444': mkdir /var/lib/docker/volumes/running-upgrade-20220516224826-2444: read-only file system
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:127: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.3615456764.exe start -p running-upgrade-20220516224826-2444 --memory=2200 --vm-driver=docker

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Non-zero exit: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.3615456764.exe start -p running-upgrade-20220516224826-2444 --memory=2200 --vm-driver=docker: exit status 70 (2m38.9313852s)

-- stdout --
	* [running-upgrade-20220516224826-2444] minikube v1.9.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=C:\Users\jenkins.minikube2\AppData\Local\Temp\legacy_kubeconfig2417544437
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* docker "running-upgrade-20220516224826-2444" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220516224826-2444 container: output Error response from daemon: create running-upgrade-20220516224826-2444: error while creating volume root path '/var/lib/docker/volumes/running-upgrade-20220516224826-2444': mkdir /var/lib/docker/volumes/running-upgrade-20220516224826-2444: read-only file system
	: exit status 1
	* docker "running-upgrade-20220516224826-2444" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220516224826-2444 container: output Error response from daemon: create running-upgrade-20220516224826-2444: error while creating volume root path '/var/lib/docker/volumes/running-upgrade-20220516224826-2444': mkdir /var/lib/docker/volumes/running-upgrade-20220516224826-2444: read-only file system
	: exit status 1
	  - Run: "minikube delete -p running-upgrade-20220516224826-2444", then "minikube start -p running-upgrade-20220516224826-2444 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220516224826-2444 container: output Error response from daemon: create running-upgrade-20220516224826-2444: error while creating volume root path '/var/lib/docker/volumes/running-upgrade-20220516224826-2444': mkdir /var/lib/docker/volumes/running-upgrade-20220516224826-2444: read-only file system
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:127: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.3615456764.exe start -p running-upgrade-20220516224826-2444 --memory=2200 --vm-driver=docker

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Non-zero exit: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.3615456764.exe start -p running-upgrade-20220516224826-2444 --memory=2200 --vm-driver=docker: exit status 70 (1m10.2621455s)

-- stdout --
	* [running-upgrade-20220516224826-2444] minikube v1.9.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=C:\Users\jenkins.minikube2\AppData\Local\Temp\legacy_kubeconfig2721422855
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* docker "running-upgrade-20220516224826-2444" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220516224826-2444 container: output Error response from daemon: create running-upgrade-20220516224826-2444: error while creating volume root path '/var/lib/docker/volumes/running-upgrade-20220516224826-2444': mkdir /var/lib/docker/volumes/running-upgrade-20220516224826-2444: read-only file system
	: exit status 1
	* docker "running-upgrade-20220516224826-2444" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220516224826-2444 container: output Error response from daemon: create running-upgrade-20220516224826-2444: error while creating volume root path '/var/lib/docker/volumes/running-upgrade-20220516224826-2444': mkdir /var/lib/docker/volumes/running-upgrade-20220516224826-2444: read-only file system
	: exit status 1
	  - Run: "minikube delete -p running-upgrade-20220516224826-2444", then "minikube start -p running-upgrade-20220516224826-2444 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220516224826-2444 container: output Error response from daemon: create running-upgrade-20220516224826-2444: error while creating volume root path '/var/lib/docker/volumes/running-upgrade-20220516224826-2444': mkdir /var/lib/docker/volumes/running-upgrade-20220516224826-2444: read-only file system
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:133: legacy v1.9.0 start failed: exit status 70
panic.go:482: *** TestRunningBinaryUpgrade FAILED at 2022-05-16 22:54:27.2352742 +0000 GMT m=+3534.902462501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-20220516224826-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect running-upgrade-20220516224826-2444: exit status 1 (1.1004308s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: running-upgrade-20220516224826-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p running-upgrade-20220516224826-2444 -n running-upgrade-20220516224826-2444

=== CONT  TestRunningBinaryUpgrade
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p running-upgrade-20220516224826-2444 -n running-upgrade-20220516224826-2444: exit status 7 (2.9980228s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:54:31.311769    7776 status.go:247] status error: host: state: unknown state "running-upgrade-20220516224826-2444": docker container inspect running-upgrade-20220516224826-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: running-upgrade-20220516224826-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "running-upgrade-20220516224826-2444" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "running-upgrade-20220516224826-2444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-20220516224826-2444

=== CONT  TestRunningBinaryUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-20220516224826-2444: (8.6566188s)
--- FAIL: TestRunningBinaryUpgrade (373.13s)
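All three legacy v1.9.0 start attempts above die on the same `read-only file system` error from the daemon, while earlier tests in this report also hit the subnet conflict. When triaging a report with 150 failures, it helps to bucket each test's stderr by known root-cause substrings; a small, hypothetical classifier (the bucket names are ours, not minikube's):

```python
# Map known daemon error substrings to a triage bucket. Insertion order
# matters: the first matching substring wins.
KNOWN_CAUSES = {
    "read-only file system": "daemon-storage-readonly",
    "networks have overlapping IPv4": "subnet-conflict",
    "No such container": "container-missing",
}

def classify(stderr: str) -> str:
    """Return the first matching triage bucket, or 'unknown'."""
    for needle, bucket in KNOWN_CAUSES.items():
        if needle in stderr:
            return bucket
    return "unknown"
```

For example, `classify("mkdir /var/lib/docker/volumes/x: read-only file system")` yields `"daemon-storage-readonly"`, which is the bucket this test and the earlier skaffold volume failure share, pointing at a single daemon-level problem rather than per-test flakes.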

TestKubernetesUpgrade (116.81s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220516225336-2444 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220516225336-2444 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker: exit status 60 (1m21.276928s)

-- stdout --
	* [kubernetes-upgrade-20220516225336-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node kubernetes-upgrade-20220516225336-2444 in cluster kubernetes-upgrade-20220516225336-2444
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "kubernetes-upgrade-20220516225336-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0516 22:53:37.085530    1480 out.go:296] Setting OutFile to fd 1824 ...
	I0516 22:53:37.154410    1480 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:53:37.154410    1480 out.go:309] Setting ErrFile to fd 1752...
	I0516 22:53:37.154410    1480 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:53:37.170406    1480 out.go:303] Setting JSON to false
	I0516 22:53:37.172403    1480 start.go:115] hostinfo: {"hostname":"minikube2","uptime":4729,"bootTime":1652736888,"procs":156,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0516 22:53:37.172403    1480 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0516 22:53:37.178415    1480 out.go:177] * [kubernetes-upgrade-20220516225336-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0516 22:53:37.181404    1480 notify.go:193] Checking for updates...
	I0516 22:53:37.183409    1480 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0516 22:53:37.185405    1480 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0516 22:53:37.188408    1480 out.go:177]   - MINIKUBE_LOCATION=12739
	I0516 22:53:37.192408    1480 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0516 22:53:37.198410    1480 config.go:178] Loaded profile config "force-systemd-env-20220516225309-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:53:37.198410    1480 config.go:178] Loaded profile config "force-systemd-flag-20220516225238-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:53:37.199406    1480 config.go:178] Loaded profile config "multinode-20220516223121-2444-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:53:37.199406    1480 config.go:178] Loaded profile config "running-upgrade-20220516224826-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0516 22:53:37.199406    1480 driver.go:358] Setting default libvirt URI to qemu:///system
	I0516 22:53:39.954991    1480 docker.go:137] docker version: linux-20.10.14
	I0516 22:53:39.964397    1480 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:53:42.066330    1480 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.101732s)
	I0516 22:53:42.066579    1480 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 22:53:40.9924981 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:53:42.072898    1480 out.go:177] * Using the docker driver based on user configuration
	I0516 22:53:42.075099    1480 start.go:284] selected driver: docker
	I0516 22:53:42.075099    1480 start.go:806] validating driver "docker" against <nil>
	I0516 22:53:42.075099    1480 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0516 22:53:42.146011    1480 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:53:44.250638    1480 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1044349s)
	I0516 22:53:44.250704    1480 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 22:53:43.1634257 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:53:44.250704    1480 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0516 22:53:44.251924    1480 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0516 22:53:44.256817    1480 out.go:177] * Using Docker Desktop driver with the root privilege
	I0516 22:53:44.258212    1480 cni.go:95] Creating CNI manager for ""
	I0516 22:53:44.258212    1480 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0516 22:53:44.259169    1480 start_flags.go:306] config:
	{Name:kubernetes-upgrade-20220516225336-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220516225336-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 22:53:44.261034    1480 out.go:177] * Starting control plane node kubernetes-upgrade-20220516225336-2444 in cluster kubernetes-upgrade-20220516225336-2444
	I0516 22:53:44.265269    1480 cache.go:120] Beginning downloading kic base image for docker with docker
	I0516 22:53:44.267374    1480 out.go:177] * Pulling base image ...
	I0516 22:53:44.270621    1480 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0516 22:53:44.270621    1480 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0516 22:53:44.270621    1480 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0516 22:53:44.270621    1480 cache.go:57] Caching tarball of preloaded images
	I0516 22:53:44.270621    1480 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0516 22:53:44.271904    1480 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0516 22:53:44.271904    1480 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kubernetes-upgrade-20220516225336-2444\config.json ...
	I0516 22:53:44.271904    1480 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kubernetes-upgrade-20220516225336-2444\config.json: {Name:mkf8c58fd04e55a96db60c107221ee4022029c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0516 22:53:45.378338    1480 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0516 22:53:45.378502    1480 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:53:45.378502    1480 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:53:45.378502    1480 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
	I0516 22:53:45.379119    1480 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory, skipping pull
	I0516 22:53:45.379119    1480 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in cache, skipping pull
	I0516 22:53:45.379119    1480 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
	I0516 22:53:45.379119    1480 cache.go:160] Loading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from local cache
	I0516 22:53:45.379119    1480 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:53:47.769680    1480 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from cached tarball
	I0516 22:53:47.769680    1480 cache.go:206] Successfully downloaded all kic artifacts
	I0516 22:53:47.769680    1480 start.go:352] acquiring machines lock for kubernetes-upgrade-20220516225336-2444: {Name:mka3724ddf4a497f837518183e7743155758ec50 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:53:47.770260    1480 start.go:356] acquired machines lock for "kubernetes-upgrade-20220516225336-2444" in 579.5µs
	I0516 22:53:47.770395    1480 start.go:91] Provisioning new machine with config: &{Name:kubernetes-upgrade-20220516225336-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220516225336-2444 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8
443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0516 22:53:47.770395    1480 start.go:131] createHost starting for "" (driver="docker")
	I0516 22:53:47.776524    1480 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0516 22:53:47.776524    1480 start.go:165] libmachine.API.Create for "kubernetes-upgrade-20220516225336-2444" (driver="docker")
	I0516 22:53:47.776524    1480 client.go:168] LocalClient.Create starting
	I0516 22:53:47.777400    1480 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 22:53:47.777400    1480 main.go:134] libmachine: Decoding PEM data...
	I0516 22:53:47.777400    1480 main.go:134] libmachine: Parsing certificate...
	I0516 22:53:47.777400    1480 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 22:53:47.778023    1480 main.go:134] libmachine: Decoding PEM data...
	I0516 22:53:47.778023    1480 main.go:134] libmachine: Parsing certificate...
	I0516 22:53:47.788744    1480 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220516225336-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:53:48.889787    1480 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220516225336-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:53:48.889787    1480 cli_runner.go:217] Completed: docker network inspect kubernetes-upgrade-20220516225336-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1009092s)
	I0516 22:53:48.897801    1480 network_create.go:272] running [docker network inspect kubernetes-upgrade-20220516225336-2444] to gather additional debugging logs...
	I0516 22:53:48.897801    1480 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220516225336-2444
	W0516 22:53:49.975683    1480 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220516225336-2444 returned with exit code 1
	I0516 22:53:49.975683    1480 cli_runner.go:217] Completed: docker network inspect kubernetes-upgrade-20220516225336-2444: (1.0777617s)
	I0516 22:53:49.975683    1480 network_create.go:275] error running [docker network inspect kubernetes-upgrade-20220516225336-2444]: docker network inspect kubernetes-upgrade-20220516225336-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-20220516225336-2444
	I0516 22:53:49.975683    1480 network_create.go:277] output of [docker network inspect kubernetes-upgrade-20220516225336-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-20220516225336-2444
	
	** /stderr **
	I0516 22:53:49.983458    1480 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 22:53:51.051943    1480 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0684762s)
	I0516 22:53:51.072991    1480 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000006980] misses:0}
	I0516 22:53:51.072991    1480 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:53:51.073570    1480 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220516225336-2444 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0516 22:53:51.081704    1480 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220516225336-2444
	W0516 22:53:52.160268    1480 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220516225336-2444 returned with exit code 1
	I0516 22:53:52.165252    1480 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220516225336-2444: (1.0784948s)
	W0516 22:53:52.165480    1480 network_create.go:107] failed to create docker network kubernetes-upgrade-20220516225336-2444 192.168.49.0/24, will retry: subnet is taken
	I0516 22:53:52.187421    1480 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006980] amended:false}} dirty:map[] misses:0}
	I0516 22:53:52.187860    1480 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:53:52.207245    1480 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006980] amended:true}} dirty:map[192.168.49.0:0xc000006980 192.168.58.0:0xc00058c190] misses:0}
	I0516 22:53:52.207245    1480 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:53:52.207245    1480 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220516225336-2444 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0516 22:53:52.217646    1480 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220516225336-2444
	W0516 22:53:53.319997    1480 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220516225336-2444 returned with exit code 1
	I0516 22:53:53.319997    1480 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220516225336-2444: (1.1023418s)
	W0516 22:53:53.319997    1480 network_create.go:107] failed to create docker network kubernetes-upgrade-20220516225336-2444 192.168.58.0/24, will retry: subnet is taken
	I0516 22:53:53.338400    1480 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006980] amended:true}} dirty:map[192.168.49.0:0xc000006980 192.168.58.0:0xc00058c190] misses:1}
	I0516 22:53:53.338934    1480 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:53:53.357761    1480 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006980] amended:true}} dirty:map[192.168.49.0:0xc000006980 192.168.58.0:0xc00058c190 192.168.67.0:0xc00058c248] misses:1}
	I0516 22:53:53.358156    1480 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:53:53.358156    1480 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220516225336-2444 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0516 22:53:53.366401    1480 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220516225336-2444
	W0516 22:53:54.456410    1480 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220516225336-2444 returned with exit code 1
	I0516 22:53:54.456604    1480 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220516225336-2444: (1.0898971s)
	W0516 22:53:54.456654    1480 network_create.go:107] failed to create docker network kubernetes-upgrade-20220516225336-2444 192.168.67.0/24, will retry: subnet is taken
	I0516 22:53:54.476798    1480 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006980] amended:true}} dirty:map[192.168.49.0:0xc000006980 192.168.58.0:0xc00058c190 192.168.67.0:0xc00058c248] misses:2}
	I0516 22:53:54.476798    1480 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:53:54.495863    1480 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006980] amended:true}} dirty:map[192.168.49.0:0xc000006980 192.168.58.0:0xc00058c190 192.168.67.0:0xc00058c248 192.168.76.0:0xc000484710] misses:2}
	I0516 22:53:54.495863    1480 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:53:54.495863    1480 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220516225336-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0516 22:53:54.505399    1480 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220516225336-2444
	W0516 22:53:55.588092    1480 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220516225336-2444 returned with exit code 1
	I0516 22:53:55.588092    1480 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220516225336-2444: (1.0826848s)
	E0516 22:53:55.588092    1480 network_create.go:104] error while trying to create docker network kubernetes-upgrade-20220516225336-2444 192.168.76.0/24: create docker network kubernetes-upgrade-20220516225336-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220516225336-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 729af9d709e766f8f9641f8551fc221177a1e7ef8cd5426fe870296e927830e1 (br-729af9d709e7): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	W0516 22:53:55.588092    1480 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubernetes-upgrade-20220516225336-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220516225336-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 729af9d709e766f8f9641f8551fc221177a1e7ef8cd5426fe870296e927830e1 (br-729af9d709e7): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubernetes-upgrade-20220516225336-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220516225336-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 729af9d709e766f8f9641f8551fc221177a1e7ef8cd5426fe870296e927830e1 (br-729af9d709e7): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	I0516 22:53:55.606108    1480 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 22:53:56.693855    1480 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0877381s)
	I0516 22:53:56.701856    1480 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-20220516225336-2444 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220516225336-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 22:53:57.793231    1480 cli_runner.go:211] docker volume create kubernetes-upgrade-20220516225336-2444 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220516225336-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 22:53:57.793303    1480 cli_runner.go:217] Completed: docker volume create kubernetes-upgrade-20220516225336-2444 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220516225336-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0912682s)
	I0516 22:53:57.793303    1480 client.go:171] LocalClient.Create took 10.0166956s
	I0516 22:53:59.820490    1480 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:53:59.828606    1480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444
	W0516 22:54:00.895467    1480 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444 returned with exit code 1
	I0516 22:54:00.895679    1480 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444: (1.0667045s)
	I0516 22:54:00.895679    1480 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220516225336-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	I0516 22:54:01.185826    1480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444
	W0516 22:54:02.260900    1480 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444 returned with exit code 1
	I0516 22:54:02.260900    1480 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444: (1.0749311s)
	W0516 22:54:02.260900    1480 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220516225336-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	
	W0516 22:54:02.260900    1480 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220516225336-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	I0516 22:54:02.272522    1480 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:54:02.280369    1480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444
	W0516 22:54:03.373268    1480 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444 returned with exit code 1
	I0516 22:54:03.373268    1480 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444: (1.0928894s)
	I0516 22:54:03.373268    1480 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220516225336-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	I0516 22:54:03.675665    1480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444
	W0516 22:54:04.752337    1480 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444 returned with exit code 1
	I0516 22:54:04.752337    1480 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444: (1.0764646s)
	W0516 22:54:04.752337    1480 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220516225336-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	
	W0516 22:54:04.752337    1480 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220516225336-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	I0516 22:54:04.752337    1480 start.go:134] duration metric: createHost completed in 16.9818017s
	I0516 22:54:04.752337    1480 start.go:81] releasing machines lock for "kubernetes-upgrade-20220516225336-2444", held for 16.981869s
	W0516 22:54:04.752890    1480 start.go:608] error starting host: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220516225336-2444 container: docker volume create kubernetes-upgrade-20220516225336-2444 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220516225336-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubernetes-upgrade-20220516225336-2444: error while creating volume root path '/var/lib/docker/volumes/kubernetes-upgrade-20220516225336-2444': mkdir /var/lib/docker/volumes/kubernetes-upgrade-20220516225336-2444: read-only file system
	I0516 22:54:04.768861    1480 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}
	W0516 22:54:05.822129    1480 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:54:05.822129    1480 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}: (1.0532597s)
	I0516 22:54:05.822129    1480 delete.go:82] Unable to get host status for kubernetes-upgrade-20220516225336-2444, assuming it has already been deleted: state: unknown state "kubernetes-upgrade-20220516225336-2444": docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	W0516 22:54:05.822129    1480 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220516225336-2444 container: docker volume create kubernetes-upgrade-20220516225336-2444 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220516225336-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubernetes-upgrade-20220516225336-2444: error while creating volume root path '/var/lib/docker/volumes/kubernetes-upgrade-20220516225336-2444': mkdir /var/lib/docker/volumes/kubernetes-upgrade-20220516225336-2444: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220516225336-2444 container: docker volume create kubernetes-upgrade-20220516225336-2444 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220516225336-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubernetes-upgrade-20220516225336-2444: error while creating volume root path '/var/lib/docker/volumes/kubernetes-upgrade-20220516225336-2444': mkdir /var/lib/docker/volumes/kubernetes-upgrade-20220516225336-2444: read-only file system
	
	I0516 22:54:05.822129    1480 start.go:623] Will try again in 5 seconds ...
	I0516 22:54:10.836143    1480 start.go:352] acquiring machines lock for kubernetes-upgrade-20220516225336-2444: {Name:mka3724ddf4a497f837518183e7743155758ec50 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:54:10.836364    1480 start.go:356] acquired machines lock for "kubernetes-upgrade-20220516225336-2444" in 0s
	I0516 22:54:10.836364    1480 start.go:94] Skipping create...Using existing machine configuration
	I0516 22:54:10.836364    1480 fix.go:55] fixHost starting: 
	I0516 22:54:10.864737    1480 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}
	W0516 22:54:11.944040    1480 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:54:11.944097    1480 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}: (1.0790424s)
	I0516 22:54:11.944245    1480 fix.go:103] recreateIfNeeded on kubernetes-upgrade-20220516225336-2444: state= err=unknown state "kubernetes-upgrade-20220516225336-2444": docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	I0516 22:54:11.944269    1480 fix.go:108] machineExists: false. err=machine does not exist
	I0516 22:54:11.947829    1480 out.go:177] * docker "kubernetes-upgrade-20220516225336-2444" container is missing, will recreate.
	I0516 22:54:11.949974    1480 delete.go:124] DEMOLISHING kubernetes-upgrade-20220516225336-2444 ...
	I0516 22:54:11.968739    1480 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}
	W0516 22:54:13.063603    1480 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:54:13.063657    1480 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}: (1.0946188s)
	W0516 22:54:13.063657    1480 stop.go:75] unable to get state: unknown state "kubernetes-upgrade-20220516225336-2444": docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	I0516 22:54:13.063657    1480 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "kubernetes-upgrade-20220516225336-2444": docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	I0516 22:54:13.079161    1480 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}
	W0516 22:54:14.156483    1480 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:54:14.156483    1480 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}: (1.077313s)
	I0516 22:54:14.156483    1480 delete.go:82] Unable to get host status for kubernetes-upgrade-20220516225336-2444, assuming it has already been deleted: state: unknown state "kubernetes-upgrade-20220516225336-2444": docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	I0516 22:54:14.165463    1480 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubernetes-upgrade-20220516225336-2444
	W0516 22:54:15.286162    1480 cli_runner.go:211] docker container inspect -f {{.Id}} kubernetes-upgrade-20220516225336-2444 returned with exit code 1
	I0516 22:54:15.286162    1480 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} kubernetes-upgrade-20220516225336-2444: (1.1206894s)
	I0516 22:54:15.286162    1480 kic.go:356] could not find the container kubernetes-upgrade-20220516225336-2444 to remove it. will try anyways
	I0516 22:54:15.296155    1480 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}
	W0516 22:54:16.384674    1480 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:54:16.384674    1480 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}: (1.0885095s)
	W0516 22:54:16.384674    1480 oci.go:84] error getting container status, will try to delete anyways: unknown state "kubernetes-upgrade-20220516225336-2444": docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	I0516 22:54:16.392689    1480 cli_runner.go:164] Run: docker exec --privileged -t kubernetes-upgrade-20220516225336-2444 /bin/bash -c "sudo init 0"
	W0516 22:54:17.574522    1480 cli_runner.go:211] docker exec --privileged -t kubernetes-upgrade-20220516225336-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 22:54:17.574522    1480 cli_runner.go:217] Completed: docker exec --privileged -t kubernetes-upgrade-20220516225336-2444 /bin/bash -c "sudo init 0": (1.1818236s)
	I0516 22:54:17.574522    1480 oci.go:641] error shutdown kubernetes-upgrade-20220516225336-2444: docker exec --privileged -t kubernetes-upgrade-20220516225336-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	I0516 22:54:18.583836    1480 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}
	W0516 22:54:19.671440    1480 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:54:19.671574    1480 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}: (1.0873555s)
	I0516 22:54:19.671691    1480 oci.go:653] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220516225336-2444": docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	I0516 22:54:19.671741    1480 oci.go:655] temporary error: container kubernetes-upgrade-20220516225336-2444 status is  but expect it to be exited
	I0516 22:54:19.671793    1480 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20220516225336-2444": docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	I0516 22:54:20.149772    1480 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}
	W0516 22:54:21.244747    1480 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:54:21.244747    1480 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}: (1.0949654s)
	I0516 22:54:21.244747    1480 oci.go:653] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220516225336-2444": docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	I0516 22:54:21.244747    1480 oci.go:655] temporary error: container kubernetes-upgrade-20220516225336-2444 status is  but expect it to be exited
	I0516 22:54:21.244747    1480 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20220516225336-2444": docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	I0516 22:54:22.159981    1480 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}
	W0516 22:54:23.288009    1480 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:54:23.288009    1480 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}: (1.1280184s)
	I0516 22:54:23.288009    1480 oci.go:653] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220516225336-2444": docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	I0516 22:54:23.288009    1480 oci.go:655] temporary error: container kubernetes-upgrade-20220516225336-2444 status is  but expect it to be exited
	I0516 22:54:23.288009    1480 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20220516225336-2444": docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	I0516 22:54:23.945241    1480 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}
	W0516 22:54:25.043188    1480 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:54:25.043188    1480 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}: (1.0979377s)
	I0516 22:54:25.043188    1480 oci.go:653] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220516225336-2444": docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	I0516 22:54:25.043188    1480 oci.go:655] temporary error: container kubernetes-upgrade-20220516225336-2444 status is  but expect it to be exited
	I0516 22:54:25.043188    1480 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20220516225336-2444": docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	I0516 22:54:26.167706    1480 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}
	W0516 22:54:27.238256    1480 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:54:27.238256    1480 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}: (1.0705412s)
	I0516 22:54:27.238256    1480 oci.go:653] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220516225336-2444": docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	I0516 22:54:27.238256    1480 oci.go:655] temporary error: container kubernetes-upgrade-20220516225336-2444 status is  but expect it to be exited
	I0516 22:54:27.238256    1480 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20220516225336-2444": docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	I0516 22:54:28.767732    1480 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}
	W0516 22:54:29.899632    1480 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:54:29.899632    1480 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}: (1.1318912s)
	I0516 22:54:29.899632    1480 oci.go:653] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220516225336-2444": docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	I0516 22:54:29.899632    1480 oci.go:655] temporary error: container kubernetes-upgrade-20220516225336-2444 status is  but expect it to be exited
	I0516 22:54:29.899632    1480 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20220516225336-2444": docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	I0516 22:54:32.959943    1480 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}
	W0516 22:54:34.054177    1480 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:54:34.054318    1480 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}: (1.0940928s)
	I0516 22:54:34.054318    1480 oci.go:653] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220516225336-2444": docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	I0516 22:54:34.054318    1480 oci.go:655] temporary error: container kubernetes-upgrade-20220516225336-2444 status is  but expect it to be exited
	I0516 22:54:34.054318    1480 oci.go:88] couldn't shut down kubernetes-upgrade-20220516225336-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20220516225336-2444": docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	 
	I0516 22:54:34.062245    1480 cli_runner.go:164] Run: docker rm -f -v kubernetes-upgrade-20220516225336-2444
	I0516 22:54:35.194008    1480 cli_runner.go:217] Completed: docker rm -f -v kubernetes-upgrade-20220516225336-2444: (1.1317532s)
	I0516 22:54:35.202006    1480 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubernetes-upgrade-20220516225336-2444
	W0516 22:54:36.310171    1480 cli_runner.go:211] docker container inspect -f {{.Id}} kubernetes-upgrade-20220516225336-2444 returned with exit code 1
	I0516 22:54:36.310234    1480 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} kubernetes-upgrade-20220516225336-2444: (1.1081232s)
	I0516 22:54:36.317966    1480 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220516225336-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:54:37.455116    1480 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220516225336-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:54:37.455116    1480 cli_runner.go:217] Completed: docker network inspect kubernetes-upgrade-20220516225336-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.137117s)
	I0516 22:54:37.464277    1480 network_create.go:272] running [docker network inspect kubernetes-upgrade-20220516225336-2444] to gather additional debugging logs...
	I0516 22:54:37.464277    1480 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220516225336-2444
	W0516 22:54:38.542379    1480 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220516225336-2444 returned with exit code 1
	I0516 22:54:38.542379    1480 cli_runner.go:217] Completed: docker network inspect kubernetes-upgrade-20220516225336-2444: (1.0780931s)
	I0516 22:54:38.542379    1480 network_create.go:275] error running [docker network inspect kubernetes-upgrade-20220516225336-2444]: docker network inspect kubernetes-upgrade-20220516225336-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-20220516225336-2444
	I0516 22:54:38.542379    1480 network_create.go:277] output of [docker network inspect kubernetes-upgrade-20220516225336-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-20220516225336-2444
	
	** /stderr **
	W0516 22:54:38.543536    1480 delete.go:139] delete failed (probably ok) <nil>
	I0516 22:54:38.543576    1480 fix.go:115] Sleeping 1 second for extra luck!
	I0516 22:54:39.558756    1480 start.go:131] createHost starting for "" (driver="docker")
	I0516 22:54:39.562601    1480 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0516 22:54:39.562601    1480 start.go:165] libmachine.API.Create for "kubernetes-upgrade-20220516225336-2444" (driver="docker")
	I0516 22:54:39.562601    1480 client.go:168] LocalClient.Create starting
	I0516 22:54:39.563187    1480 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 22:54:39.563784    1480 main.go:134] libmachine: Decoding PEM data...
	I0516 22:54:39.563784    1480 main.go:134] libmachine: Parsing certificate...
	I0516 22:54:39.563784    1480 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 22:54:39.564361    1480 main.go:134] libmachine: Decoding PEM data...
	I0516 22:54:39.564361    1480 main.go:134] libmachine: Parsing certificate...
	I0516 22:54:39.575943    1480 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220516225336-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:54:40.646988    1480 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220516225336-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:54:40.646988    1480 cli_runner.go:217] Completed: docker network inspect kubernetes-upgrade-20220516225336-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0710356s)
	I0516 22:54:40.653988    1480 network_create.go:272] running [docker network inspect kubernetes-upgrade-20220516225336-2444] to gather additional debugging logs...
	I0516 22:54:40.653988    1480 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220516225336-2444
	W0516 22:54:41.743549    1480 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220516225336-2444 returned with exit code 1
	I0516 22:54:41.743549    1480 cli_runner.go:217] Completed: docker network inspect kubernetes-upgrade-20220516225336-2444: (1.0895513s)
	I0516 22:54:41.743549    1480 network_create.go:275] error running [docker network inspect kubernetes-upgrade-20220516225336-2444]: docker network inspect kubernetes-upgrade-20220516225336-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-20220516225336-2444
	I0516 22:54:41.743549    1480 network_create.go:277] output of [docker network inspect kubernetes-upgrade-20220516225336-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-20220516225336-2444
	
	** /stderr **
	I0516 22:54:41.751554    1480 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 22:54:42.852085    1480 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1005225s)
	I0516 22:54:42.869090    1480 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006980] amended:true}} dirty:map[192.168.49.0:0xc000006980 192.168.58.0:0xc00058c190 192.168.67.0:0xc00058c248 192.168.76.0:0xc000484710] misses:2}
	I0516 22:54:42.869090    1480 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:54:42.883086    1480 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006980] amended:true}} dirty:map[192.168.49.0:0xc000006980 192.168.58.0:0xc00058c190 192.168.67.0:0xc00058c248 192.168.76.0:0xc000484710] misses:3}
	I0516 22:54:42.883086    1480 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:54:42.904940    1480 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006980 192.168.58.0:0xc00058c190 192.168.67.0:0xc00058c248 192.168.76.0:0xc000484710] amended:false}} dirty:map[] misses:0}
	I0516 22:54:42.904940    1480 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:54:42.920832    1480 network.go:279] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006980 192.168.58.0:0xc00058c190 192.168.67.0:0xc00058c248 192.168.76.0:0xc000484710] amended:false}} dirty:map[] misses:0}
	I0516 22:54:42.920832    1480 network.go:238] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:54:42.935832    1480 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006980 192.168.58.0:0xc00058c190 192.168.67.0:0xc00058c248 192.168.76.0:0xc000484710] amended:true}} dirty:map[192.168.49.0:0xc000006980 192.168.58.0:0xc00058c190 192.168.67.0:0xc00058c248 192.168.76.0:0xc000484710 192.168.85.0:0xc0000070b0] misses:0}
	I0516 22:54:42.935832    1480 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:54:42.935832    1480 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220516225336-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0516 22:54:42.942844    1480 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220516225336-2444
	W0516 22:54:44.046326    1480 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220516225336-2444 returned with exit code 1
	I0516 22:54:44.046326    1480 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220516225336-2444: (1.1034726s)
	E0516 22:54:44.046326    1480 network_create.go:104] error while trying to create docker network kubernetes-upgrade-20220516225336-2444 192.168.85.0/24: create docker network kubernetes-upgrade-20220516225336-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220516225336-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 2695c7ee54c67fbeeb2a0f22f1a903f011185ea4454ca781fda339c042a0fec2 (br-2695c7ee54c6): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	W0516 22:54:44.046326    1480 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubernetes-upgrade-20220516225336-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220516225336-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 2695c7ee54c67fbeeb2a0f22f1a903f011185ea4454ca781fda339c042a0fec2 (br-2695c7ee54c6): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubernetes-upgrade-20220516225336-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220516225336-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 2695c7ee54c67fbeeb2a0f22f1a903f011185ea4454ca781fda339c042a0fec2 (br-2695c7ee54c6): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	I0516 22:54:44.063322    1480 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 22:54:45.162226    1480 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0988944s)
	I0516 22:54:45.174224    1480 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-20220516225336-2444 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220516225336-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 22:54:46.222361    1480 cli_runner.go:211] docker volume create kubernetes-upgrade-20220516225336-2444 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220516225336-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 22:54:46.222361    1480 cli_runner.go:217] Completed: docker volume create kubernetes-upgrade-20220516225336-2444 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220516225336-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0481281s)
	I0516 22:54:46.222361    1480 client.go:171] LocalClient.Create took 6.6597036s
	I0516 22:54:48.242780    1480 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:54:48.249705    1480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444
	W0516 22:54:49.346725    1480 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444 returned with exit code 1
	I0516 22:54:49.346725    1480 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444: (1.0970115s)
	I0516 22:54:49.346725    1480 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220516225336-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	I0516 22:54:49.689001    1480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444
	W0516 22:54:50.785206    1480 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444 returned with exit code 1
	I0516 22:54:50.785206    1480 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444: (1.0961958s)
	W0516 22:54:50.785206    1480 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220516225336-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	
	W0516 22:54:50.785206    1480 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220516225336-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	I0516 22:54:50.795221    1480 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:54:50.803213    1480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444
	W0516 22:54:51.846940    1480 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444 returned with exit code 1
	I0516 22:54:51.846940    1480 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444: (1.043719s)
	I0516 22:54:51.846940    1480 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220516225336-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	I0516 22:54:52.076950    1480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444
	W0516 22:54:53.152951    1480 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444 returned with exit code 1
	I0516 22:54:53.152951    1480 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444: (1.0758192s)
	W0516 22:54:53.152951    1480 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220516225336-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	
	W0516 22:54:53.152951    1480 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220516225336-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	I0516 22:54:53.152951    1480 start.go:134] duration metric: createHost completed in 13.5940806s
	I0516 22:54:53.169989    1480 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:54:53.177948    1480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444
	W0516 22:54:54.270018    1480 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444 returned with exit code 1
	I0516 22:54:54.270018    1480 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444: (1.0920606s)
	I0516 22:54:54.270018    1480 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220516225336-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	I0516 22:54:54.527443    1480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444
	W0516 22:54:55.640204    1480 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444 returned with exit code 1
	I0516 22:54:55.640204    1480 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444: (1.1127522s)
	W0516 22:54:55.640204    1480 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220516225336-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	
	W0516 22:54:55.640204    1480 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220516225336-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	I0516 22:54:55.651203    1480 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:54:55.658203    1480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444
	W0516 22:54:56.752520    1480 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444 returned with exit code 1
	I0516 22:54:56.752520    1480 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444: (1.0943079s)
	I0516 22:54:56.752520    1480 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220516225336-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	I0516 22:54:56.966589    1480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444
	W0516 22:54:58.062711    1480 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444 returned with exit code 1
	I0516 22:54:58.062776    1480 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444: (1.0960297s)
	W0516 22:54:58.063043    1480 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220516225336-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	
	W0516 22:54:58.063094    1480 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220516225336-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	I0516 22:54:58.063139    1480 fix.go:57] fixHost completed within 47.2263785s
	I0516 22:54:58.063139    1480 start.go:81] releasing machines lock for "kubernetes-upgrade-20220516225336-2444", held for 47.2263785s
	W0516 22:54:58.063139    1480 out.go:239] * Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-20220516225336-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220516225336-2444 container: docker volume create kubernetes-upgrade-20220516225336-2444 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220516225336-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubernetes-upgrade-20220516225336-2444: error while creating volume root path '/var/lib/docker/volumes/kubernetes-upgrade-20220516225336-2444': mkdir /var/lib/docker/volumes/kubernetes-upgrade-20220516225336-2444: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-20220516225336-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220516225336-2444 container: docker volume create kubernetes-upgrade-20220516225336-2444 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220516225336-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubernetes-upgrade-20220516225336-2444: error while creating volume root path '/var/lib/docker/volumes/kubernetes-upgrade-20220516225336-2444': mkdir /var/lib/docker/volumes/kubernetes-upgrade-20220516225336-2444: read-only file system
	
	I0516 22:54:58.068294    1480 out.go:177] 
	W0516 22:54:58.070731    1480 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220516225336-2444 container: docker volume create kubernetes-upgrade-20220516225336-2444 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220516225336-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubernetes-upgrade-20220516225336-2444: error while creating volume root path '/var/lib/docker/volumes/kubernetes-upgrade-20220516225336-2444': mkdir /var/lib/docker/volumes/kubernetes-upgrade-20220516225336-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220516225336-2444 container: docker volume create kubernetes-upgrade-20220516225336-2444 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220516225336-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubernetes-upgrade-20220516225336-2444: error while creating volume root path '/var/lib/docker/volumes/kubernetes-upgrade-20220516225336-2444': mkdir /var/lib/docker/volumes/kubernetes-upgrade-20220516225336-2444: read-only file system
	
	W0516 22:54:58.070731    1480 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0516 22:54:58.070731    1480 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0516 22:54:58.074406    1480 out.go:177] 

** /stderr **
version_upgrade_test.go:231: failed to start minikube HEAD with oldest k8s version: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220516225336-2444 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker: exit status 60
version_upgrade_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-20220516225336-2444
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-20220516225336-2444: exit status 82 (22.6403516s)

-- stdout --
	* Stopping node "kubernetes-upgrade-20220516225336-2444"  ...
	* Stopping node "kubernetes-upgrade-20220516225336-2444"  ...
	* Stopping node "kubernetes-upgrade-20220516225336-2444"  ...
	* Stopping node "kubernetes-upgrade-20220516225336-2444"  ...
	* Stopping node "kubernetes-upgrade-20220516225336-2444"  ...
	* Stopping node "kubernetes-upgrade-20220516225336-2444"  ...
	
	

-- /stdout --
** stderr ** 
	E0516 22:55:03.678914    2268 daemonize_windows.go:38] error terminating scheduled stop for profile kubernetes-upgrade-20220516225336-2444: stopping schedule-stop service for profile kubernetes-upgrade-20220516225336-2444: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220516225336-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516225336-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect kubernetes-upgrade-20220516225336-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_39.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
version_upgrade_test.go:236: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-20220516225336-2444 failed: exit status 82
panic.go:482: *** TestKubernetesUpgrade FAILED at 2022-05-16 22:55:20.8219123 +0000 GMT m=+3588.488650501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-20220516225336-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect kubernetes-upgrade-20220516225336-2444: exit status 1 (1.1645267s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: kubernetes-upgrade-20220516225336-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-20220516225336-2444 -n kubernetes-upgrade-20220516225336-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-20220516225336-2444 -n kubernetes-upgrade-20220516225336-2444: exit status 7 (3.0207137s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:55:24.986264    4980 status.go:247] status error: host: state: unknown state "kubernetes-upgrade-20220516225336-2444": docker container inspect kubernetes-upgrade-20220516225336-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220516225336-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-20220516225336-2444" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20220516225336-2444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-20220516225336-2444
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-20220516225336-2444: (8.6218344s)
--- FAIL: TestKubernetesUpgrade (116.81s)

TestMissingContainerUpgrade (371.33s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.1.1937321097.exe start -p missing-upgrade-20220516224650-2444 --memory=2200 --driver=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Non-zero exit: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.1.1937321097.exe start -p missing-upgrade-20220516224650-2444 --memory=2200 --driver=docker: exit status 78 (1m24.2351707s)

-- stdout --
	! [missing-upgrade-20220516224650-2444] minikube v1.9.1 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-20220516224650-2444
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	* docker "missing-upgrade-20220516224650-2444" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...

-- /stdout --
** stderr ** 
	* minikube 1.25.2 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.25.2
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 10.06 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 47.55 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 81.38 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 103.03 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 116.14 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 130.52 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 138.70 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 167.67 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 206.45 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 241.55 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 266.19 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 270.73 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
: 288.69 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 298.34 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 308.20 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 322.73 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 347.09 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 380.27 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 399.42 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 411.52 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 424.27 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 444.27 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 476.42 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 513.05 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB! StartHost failed, but will try again: creating host: create: creating: create kic node: creating volume for missing-upgrade-20220516224650-2444 container: output Error response from daemon: create missing-upgrade-20220516224650-2444: error while creating volume root path '/var/lib/docker/volumes/missing-upgrade-20220516224650-2444': mkdir /var/lib/docker/volumes/missing-upgrade-20220516224650-2444: read-only file system
	: exit status 1
	* 
	* [DOCKER_READONLY] Failed to start docker container. "minikube start -p missing-upgrade-20220516224650-2444" may fix it. recreate: creating host: create: creating: create kic node: creating volume for missing-upgrade-20220516224650-2444 container: output Error response from daemon: create missing-upgrade-20220516224650-2444: error while creating volume root path '/var/lib/docker/volumes/missing-upgrade-20220516224650-2444': mkdir /var/lib/docker/volumes/missing-upgrade-20220516224650-2444: read-only file system
	: exit status 1
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
version_upgrade_test.go:316: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.1.1937321097.exe start -p missing-upgrade-20220516224650-2444 --memory=2200 --driver=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Non-zero exit: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.1.1937321097.exe start -p missing-upgrade-20220516224650-2444 --memory=2200 --driver=docker: exit status 78 (1m51.2016528s)

-- stdout --
	* [missing-upgrade-20220516224650-2444] minikube v1.9.1 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20220516224650-2444
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* docker "missing-upgrade-20220516224650-2444" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	* docker "missing-upgrade-20220516224650-2444" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 15.05 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 44.33 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 64.51 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 99.44 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 147.78 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 188.22 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 229.06 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 270.17 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 310.03 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 352.12 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 395.72 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 435.22 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
: 480.02 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 522.53 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: creating volume for missing-upgrade-20220516224650-2444 container: output Error response from daemon: create missing-upgrade-20220516224650-2444: error while creating volume root path '/var/lib/docker/volumes/missing-upgrade-20220516224650-2444': mkdir /var/lib/docker/volumes/missing-upgrade-20220516224650-2444: read-only file system
	: exit status 1
	* 
	* [DOCKER_READONLY] Failed to start docker container. "minikube start -p missing-upgrade-20220516224650-2444" may fix it. recreate: creating host: create: creating: create kic node: creating volume for missing-upgrade-20220516224650-2444 container: output Error response from daemon: create missing-upgrade-20220516224650-2444: error while creating volume root path '/var/lib/docker/volumes/missing-upgrade-20220516224650-2444': mkdir /var/lib/docker/volumes/missing-upgrade-20220516224650-2444: read-only file system
	: exit status 1
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
version_upgrade_test.go:316: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.1.1937321097.exe start -p missing-upgrade-20220516224650-2444 --memory=2200 --driver=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Non-zero exit: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.1.1937321097.exe start -p missing-upgrade-20220516224650-2444 --memory=2200 --driver=docker: exit status 78 (2m39.630531s)

-- stdout --
	* [missing-upgrade-20220516224650-2444] minikube v1.9.1 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20220516224650-2444
	* Pulling base image ...
	* docker "missing-upgrade-20220516224650-2444" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	* docker "missing-upgrade-20220516224650-2444" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: creating volume for missing-upgrade-20220516224650-2444 container: output Error response from daemon: create missing-upgrade-20220516224650-2444: error while creating volume root path '/var/lib/docker/volumes/missing-upgrade-20220516224650-2444': mkdir /var/lib/docker/volumes/missing-upgrade-20220516224650-2444: read-only file system
	: exit status 1
	* 
	* [DOCKER_READONLY] Failed to start docker container. "minikube start -p missing-upgrade-20220516224650-2444" may fix it. recreate: creating host: create: creating: create kic node: creating volume for missing-upgrade-20220516224650-2444 container: output Error response from daemon: create missing-upgrade-20220516224650-2444: error while creating volume root path '/var/lib/docker/volumes/missing-upgrade-20220516224650-2444': mkdir /var/lib/docker/volumes/missing-upgrade-20220516224650-2444: read-only file system
	: exit status 1
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
version_upgrade_test.go:322: release start failed: exit status 78
panic.go:482: *** TestMissingContainerUpgrade FAILED at 2022-05-16 22:52:49.0050201 +0000 GMT m=+3436.673024201
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-20220516224650-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect missing-upgrade-20220516224650-2444: exit status 1 (1.2075983s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: missing-upgrade-20220516224650-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p missing-upgrade-20220516224650-2444 -n missing-upgrade-20220516224650-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p missing-upgrade-20220516224650-2444 -n missing-upgrade-20220516224650-2444: exit status 7 (3.0114156s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:52:53.202507    1932 status.go:247] status error: host: state: unknown state "missing-upgrade-20220516224650-2444": docker container inspect missing-upgrade-20220516224650-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20220516224650-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "missing-upgrade-20220516224650-2444" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "missing-upgrade-20220516224650-2444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-20220516224650-2444
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-20220516224650-2444: (8.7411623s)
--- FAIL: TestMissingContainerUpgrade (371.33s)

TestStoppedBinaryUpgrade/Upgrade (331.66s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.808385619.exe start -p stopped-upgrade-20220516224650-2444 --memory=2200 --vm-driver=docker

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Non-zero exit: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.808385619.exe start -p stopped-upgrade-20220516224650-2444 --memory=2200 --vm-driver=docker: exit status 70 (56.6312024s)

-- stdout --
	* [stopped-upgrade-20220516224650-2444] minikube v1.9.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=C:\Users\jenkins.minikube2\AppData\Local\Temp\legacy_kubeconfig1708040804
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	! StartHost failed, but will try again: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220516224650-2444 container: output Error response from daemon: create stopped-upgrade-20220516224650-2444: error while creating volume root path '/var/lib/docker/volumes/stopped-upgrade-20220516224650-2444': mkdir /var/lib/docker/volumes/stopped-upgrade-20220516224650-2444: read-only file system
	: exit status 1
	* docker "stopped-upgrade-20220516224650-2444" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220516224650-2444 container: output Error response from daemon: create stopped-upgrade-20220516224650-2444: error while creating volume root path '/var/lib/docker/volumes/stopped-upgrade-20220516224650-2444': mkdir /var/lib/docker/volumes/stopped-upgrade-20220516224650-2444: read-only file system
	: exit status 1
	  - Run: "minikube delete -p stopped-upgrade-20220516224650-2444", then "minikube start -p stopped-upgrade-20220516224650-2444 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 74.47 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 86.36 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 102.75 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 117.89 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 122.39 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 127.77 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 163.09 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 201.41 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 220.80 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 235.06 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 246.86 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 259.08 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
: 280.73 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 316.09 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 352.53 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 384.28 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 393.59 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 404.72 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 419.20 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 450.34 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 486.75 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 522.14 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220516224650-2444 container: output Error response from daemon: create stopped-upgrade-20220516224650-2444: error while creating volume root path '/var/lib/docker/volumes/stopped-upgrade-20220516224650-2444': mkdir /var/lib/docker/volumes/stopped-upgrade-20220516224650-2444: read-only file system
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:190: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.808385619.exe start -p stopped-upgrade-20220516224650-2444 --memory=2200 --vm-driver=docker

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Non-zero exit: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.808385619.exe start -p stopped-upgrade-20220516224650-2444 --memory=2200 --vm-driver=docker: exit status 70 (1m53.0037339s)

-- stdout --
	* [stopped-upgrade-20220516224650-2444] minikube v1.9.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=C:\Users\jenkins.minikube2\AppData\Local\Temp\legacy_kubeconfig1995210065
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* docker "stopped-upgrade-20220516224650-2444" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220516224650-2444 container: output Error response from daemon: create stopped-upgrade-20220516224650-2444: error while creating volume root path '/var/lib/docker/volumes/stopped-upgrade-20220516224650-2444': mkdir /var/lib/docker/volumes/stopped-upgrade-20220516224650-2444: read-only file system
	: exit status 1
	* docker "stopped-upgrade-20220516224650-2444" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220516224650-2444 container: output Error response from daemon: create stopped-upgrade-20220516224650-2444: error while creating volume root path '/var/lib/docker/volumes/stopped-upgrade-20220516224650-2444': mkdir /var/lib/docker/volumes/stopped-upgrade-20220516224650-2444: read-only file system
	: exit status 1
	  - Run: "minikube delete -p stopped-upgrade-20220516224650-2444", then "minikube start -p stopped-upgrade-20220516224650-2444 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220516224650-2444 container: output Error response from daemon: create stopped-upgrade-20220516224650-2444: error while creating volume root path '/var/lib/docker/volumes/stopped-upgrade-20220516224650-2444': mkdir /var/lib/docker/volumes/stopped-upgrade-20220516224650-2444: read-only file system
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:190: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.808385619.exe start -p stopped-upgrade-20220516224650-2444 --memory=2200 --vm-driver=docker

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Non-zero exit: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.808385619.exe start -p stopped-upgrade-20220516224650-2444 --memory=2200 --vm-driver=docker: exit status 70 (2m38.952671s)

-- stdout --
	* [stopped-upgrade-20220516224650-2444] minikube v1.9.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=C:\Users\jenkins.minikube2\AppData\Local\Temp\legacy_kubeconfig3507494521
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* docker "stopped-upgrade-20220516224650-2444" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220516224650-2444 container: output Error response from daemon: create stopped-upgrade-20220516224650-2444: error while creating volume root path '/var/lib/docker/volumes/stopped-upgrade-20220516224650-2444': mkdir /var/lib/docker/volumes/stopped-upgrade-20220516224650-2444: read-only file system
	: exit status 1
	* docker "stopped-upgrade-20220516224650-2444" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220516224650-2444 container: output Error response from daemon: create stopped-upgrade-20220516224650-2444: error while creating volume root path '/var/lib/docker/volumes/stopped-upgrade-20220516224650-2444': mkdir /var/lib/docker/volumes/stopped-upgrade-20220516224650-2444: read-only file system
	: exit status 1
	  - Run: "minikube delete -p stopped-upgrade-20220516224650-2444", then "minikube start -p stopped-upgrade-20220516224650-2444 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220516224650-2444 container: output Error response from daemon: create stopped-upgrade-20220516224650-2444: error while creating volume root path '/var/lib/docker/volumes/stopped-upgrade-20220516224650-2444': mkdir /var/lib/docker/volumes/stopped-upgrade-20220516224650-2444: read-only file system
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:196: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (331.66s)
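[editor's note] TestKubernetesUpgrade, TestMissingContainerUpgrade, and TestStoppedBinaryUpgrade/Upgrade all fail on the same daemon error: `mkdir /var/lib/docker/volumes/<name>: read-only file system`, which minikube surfaces as DOCKER_READONLY / PR_DOCKER_READONLY_VOL. A minimal, hypothetical triage sketch (not minikube's actual reason-matching code) shows how these failures can be grouped by the marker string quoted verbatim in the logs above:

```python
# Hypothetical triage helper, not minikube code: classify a failed
# `docker volume create` by the daemon error quoted in the logs above.
READONLY_MARKER = "read-only file system"

def is_docker_readonly(daemon_stderr: str) -> bool:
    """True when the daemon refused volume creation because
    /var/lib/docker/volumes sits on a read-only file system."""
    return READONLY_MARKER in daemon_stderr

# Error text taken from the TestMissingContainerUpgrade log, with the
# profile name shortened to 'x' for readability.
example = ("Error response from daemon: create x: error while creating volume "
           "root path '/var/lib/docker/volumes/x': "
           "mkdir /var/lib/docker/volumes/x: read-only file system")
print(is_docker_readonly(example))  # True
```

All three test failures (and the per-test retries within them) match this single marker, which is consistent with minikube's suggestion that restarting Docker, rather than any per-test fix, addresses the root cause.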

TestNoKubernetes/serial/StartWithK8s (86.32s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220516224650-2444 --driver=docker

=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-20220516224650-2444 --driver=docker: exit status 60 (1m22.1513721s)

-- stdout --
	* [NoKubernetes-20220516224650-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node NoKubernetes-20220516224650-2444 in cluster NoKubernetes-20220516224650-2444
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	* docker "NoKubernetes-20220516224650-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	
	

-- /stdout --
** stderr ** 
	E0516 22:47:11.142256    1652 network_create.go:104] error while trying to create docker network NoKubernetes-20220516224650-2444 192.168.76.0/24: create docker network NoKubernetes-20220516224650-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true NoKubernetes-20220516224650-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 08e4ef11a0f8f4ae815629f1b08cc6fa4876aea84a671713cf665e8533f54428 (br-08e4ef11a0f8): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network NoKubernetes-20220516224650-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true NoKubernetes-20220516224650-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 08e4ef11a0f8f4ae815629f1b08cc6fa4876aea84a671713cf665e8533f54428 (br-08e4ef11a0f8): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for NoKubernetes-20220516224650-2444 container: docker volume create NoKubernetes-20220516224650-2444 --label name.minikube.sigs.k8s.io=NoKubernetes-20220516224650-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create NoKubernetes-20220516224650-2444: error while creating volume root path '/var/lib/docker/volumes/NoKubernetes-20220516224650-2444': mkdir /var/lib/docker/volumes/NoKubernetes-20220516224650-2444: read-only file system
	
	E0516 22:47:59.420323    1652 network_create.go:104] error while trying to create docker network NoKubernetes-20220516224650-2444 192.168.85.0/24: create docker network NoKubernetes-20220516224650-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true NoKubernetes-20220516224650-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f3555d76fe342b729dba86c9351a6e1b9b17aa1d61c3c478635a75786f5e8c0a (br-f3555d76fe34): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network NoKubernetes-20220516224650-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true NoKubernetes-20220516224650-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f3555d76fe342b729dba86c9351a6e1b9b17aa1d61c3c478635a75786f5e8c0a (br-f3555d76fe34): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p NoKubernetes-20220516224650-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for NoKubernetes-20220516224650-2444 container: docker volume create NoKubernetes-20220516224650-2444 --label name.minikube.sigs.k8s.io=NoKubernetes-20220516224650-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create NoKubernetes-20220516224650-2444: error while creating volume root path '/var/lib/docker/volumes/NoKubernetes-20220516224650-2444': mkdir /var/lib/docker/volumes/NoKubernetes-20220516224650-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for NoKubernetes-20220516224650-2444 container: docker volume create NoKubernetes-20220516224650-2444 --label name.minikube.sigs.k8s.io=NoKubernetes-20220516224650-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create NoKubernetes-20220516224650-2444: error while creating volume root path '/var/lib/docker/volumes/NoKubernetes-20220516224650-2444': mkdir /var/lib/docker/volumes/NoKubernetes-20220516224650-2444: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-20220516224650-2444 --driver=docker" : exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/StartWithK8s]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20220516224650-2444

=== CONT  TestNoKubernetes/serial/StartWithK8s
helpers_test.go:231: (dbg) Non-zero exit: docker inspect NoKubernetes-20220516224650-2444: exit status 1 (1.1473741s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: NoKubernetes-20220516224650-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220516224650-2444 -n NoKubernetes-20220516224650-2444

=== CONT  TestNoKubernetes/serial/StartWithK8s
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220516224650-2444 -n NoKubernetes-20220516224650-2444: exit status 7 (3.0053507s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:48:17.580629    4992 status.go:247] status error: host: state: unknown state "NoKubernetes-20220516224650-2444": docker container inspect NoKubernetes-20220516224650-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220516224650-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-20220516224650-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (86.32s)
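The repeated "networks have overlapping IPv4" errors above mean the subnet minikube requested (e.g. 192.168.85.0/24) collides with a subnet already claimed by an existing bridge such as br-ea4bbeff936d. Docker's overlap check can be reproduced with Python's standard ipaddress module — a minimal sketch; the requested subnets are taken from this log, while the existing-network list is a hypothetical example:

```python
import ipaddress

def conflicts(requested, existing):
    """Return the existing subnets that overlap the requested one."""
    req = ipaddress.ip_network(requested)
    return [cidr for cidr in existing if ipaddress.ip_network(cidr).overlaps(req)]

# 192.168.85.0/24 is one of the subnets minikube tried in this run.
print(conflicts("192.168.85.0/24", ["192.168.85.0/24", "10.1.0.0/16"]))
# -> ['192.168.85.0/24']
```

Running the same check against the host's actual bridge subnets (visible via `docker network inspect`) would show which leftover network is blocking each candidate.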

TestNoKubernetes/serial/StartWithStopK8s (120.84s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220516224650-2444 --no-kubernetes --driver=docker

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-20220516224650-2444 --no-kubernetes --driver=docker: exit status 60 (1m56.8523104s)

-- stdout --
	* [NoKubernetes-20220516224650-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting minikube without Kubernetes NoKubernetes-20220516224650-2444 in cluster NoKubernetes-20220516224650-2444
	* Pulling base image ...
	* docker "NoKubernetes-20220516224650-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	* docker "NoKubernetes-20220516224650-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	
	

-- /stdout --
** stderr ** 
	E0516 22:49:08.856193    7024 network_create.go:104] error while trying to create docker network NoKubernetes-20220516224650-2444 192.168.76.0/24: create docker network NoKubernetes-20220516224650-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true NoKubernetes-20220516224650-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ab0b02f9e90e8b5ce8838c9a5c243e0e0526b332e6b137b42698348aeb8e31f1 (br-ab0b02f9e90e): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network NoKubernetes-20220516224650-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true NoKubernetes-20220516224650-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ab0b02f9e90e8b5ce8838c9a5c243e0e0526b332e6b137b42698348aeb8e31f1 (br-ab0b02f9e90e): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for NoKubernetes-20220516224650-2444 container: docker volume create NoKubernetes-20220516224650-2444 --label name.minikube.sigs.k8s.io=NoKubernetes-20220516224650-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create NoKubernetes-20220516224650-2444: error while creating volume root path '/var/lib/docker/volumes/NoKubernetes-20220516224650-2444': mkdir /var/lib/docker/volumes/NoKubernetes-20220516224650-2444: read-only file system
	
	E0516 22:50:00.512598    7024 network_create.go:104] error while trying to create docker network NoKubernetes-20220516224650-2444 192.168.85.0/24: create docker network NoKubernetes-20220516224650-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true NoKubernetes-20220516224650-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 755cbc4cf711f96c421c8c6a7a5fa6edfbf0a2f74dfb2925e27829ddf23937b5 (br-755cbc4cf711): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network NoKubernetes-20220516224650-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true NoKubernetes-20220516224650-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 755cbc4cf711f96c421c8c6a7a5fa6edfbf0a2f74dfb2925e27829ddf23937b5 (br-755cbc4cf711): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p NoKubernetes-20220516224650-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for NoKubernetes-20220516224650-2444 container: docker volume create NoKubernetes-20220516224650-2444 --label name.minikube.sigs.k8s.io=NoKubernetes-20220516224650-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create NoKubernetes-20220516224650-2444: error while creating volume root path '/var/lib/docker/volumes/NoKubernetes-20220516224650-2444': mkdir /var/lib/docker/volumes/NoKubernetes-20220516224650-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for NoKubernetes-20220516224650-2444 container: docker volume create NoKubernetes-20220516224650-2444 --label name.minikube.sigs.k8s.io=NoKubernetes-20220516224650-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create NoKubernetes-20220516224650-2444: error while creating volume root path '/var/lib/docker/volumes/NoKubernetes-20220516224650-2444': mkdir /var/lib/docker/volumes/NoKubernetes-20220516224650-2444: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-20220516224650-2444 --no-kubernetes --driver=docker" : exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/StartWithStopK8s]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20220516224650-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect NoKubernetes-20220516224650-2444: exit status 1 (1.1054874s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: NoKubernetes-20220516224650-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220516224650-2444 -n NoKubernetes-20220516224650-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220516224650-2444 -n NoKubernetes-20220516224650-2444: exit status 7 (2.8675023s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:50:18.414128    6784 status.go:247] status error: host: state: unknown state "NoKubernetes-20220516224650-2444": docker container inspect NoKubernetes-20220516224650-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220516224650-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-20220516224650-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (120.84s)

TestNoKubernetes/serial/Start (96.25s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220516224650-2444 --no-kubernetes --driver=docker

=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-20220516224650-2444 --no-kubernetes --driver=docker: exit status 1 (1m32.2259146s)

-- stdout --
	* [NoKubernetes-20220516224650-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting minikube without Kubernetes NoKubernetes-20220516224650-2444 in cluster NoKubernetes-20220516224650-2444
	* Pulling base image ...
	* docker "NoKubernetes-20220516224650-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	* docker "NoKubernetes-20220516224650-2444" container is missing, will recreate.

-- /stdout --
** stderr ** 
	E0516 22:51:08.811787    7504 network_create.go:104] error while trying to create docker network NoKubernetes-20220516224650-2444 192.168.76.0/24: create docker network NoKubernetes-20220516224650-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true NoKubernetes-20220516224650-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 3ada12c633f8bf3e0ef3a1dde277f0170f413e41439cabce883433dede5dc118 (br-3ada12c633f8): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network NoKubernetes-20220516224650-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true NoKubernetes-20220516224650-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 3ada12c633f8bf3e0ef3a1dde277f0170f413e41439cabce883433dede5dc118 (br-3ada12c633f8): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for NoKubernetes-20220516224650-2444 container: docker volume create NoKubernetes-20220516224650-2444 --label name.minikube.sigs.k8s.io=NoKubernetes-20220516224650-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create NoKubernetes-20220516224650-2444: error while creating volume root path '/var/lib/docker/volumes/NoKubernetes-20220516224650-2444': mkdir /var/lib/docker/volumes/NoKubernetes-20220516224650-2444: read-only file system
	

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-20220516224650-2444 --no-kubernetes --driver=docker" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/Start]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20220516224650-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect NoKubernetes-20220516224650-2444: exit status 1 (1.1495133s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: NoKubernetes-20220516224650-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220516224650-2444 -n NoKubernetes-20220516224650-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220516224650-2444 -n NoKubernetes-20220516224650-2444: exit status 7 (2.8634161s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:51:54.674714    3936 status.go:247] status error: host: state: unknown state "NoKubernetes-20220516224650-2444": docker container inspect NoKubernetes-20220516224650-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220516224650-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-20220516224650-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/Start (96.25s)

TestPause/serial/Start (85.38s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-20220516225202-2444 --memory=2048 --install-addons=false --wait=all --driver=docker

=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p pause-20220516225202-2444 --memory=2048 --install-addons=false --wait=all --driver=docker: exit status 60 (1m21.0405698s)

-- stdout --
	* [pause-20220516225202-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node pause-20220516225202-2444 in cluster pause-20220516225202-2444
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "pause-20220516225202-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	E0516 22:52:21.061766    6028 network_create.go:104] error while trying to create docker network pause-20220516225202-2444 192.168.76.0/24: create docker network pause-20220516225202-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20220516225202-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4c855ee8fddabd945acb86883648f344ea93fde523e5e9bd6de34e1066bd9eeb (br-4c855ee8fdda): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network pause-20220516225202-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20220516225202-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4c855ee8fddabd945acb86883648f344ea93fde523e5e9bd6de34e1066bd9eeb (br-4c855ee8fdda): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for pause-20220516225202-2444 container: docker volume create pause-20220516225202-2444 --label name.minikube.sigs.k8s.io=pause-20220516225202-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create pause-20220516225202-2444: error while creating volume root path '/var/lib/docker/volumes/pause-20220516225202-2444': mkdir /var/lib/docker/volumes/pause-20220516225202-2444: read-only file system
	
	E0516 22:53:09.345896    6028 network_create.go:104] error while trying to create docker network pause-20220516225202-2444 192.168.85.0/24: create docker network pause-20220516225202-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20220516225202-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4da1a1c0c1810dc7357ccee924feaca00ebe3340b4701ee7d2c7a1f30c4cdf97 (br-4da1a1c0c181): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network pause-20220516225202-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20220516225202-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4da1a1c0c1810dc7357ccee924feaca00ebe3340b4701ee7d2c7a1f30c4cdf97 (br-4da1a1c0c181): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p pause-20220516225202-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for pause-20220516225202-2444 container: docker volume create pause-20220516225202-2444 --label name.minikube.sigs.k8s.io=pause-20220516225202-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create pause-20220516225202-2444: error while creating volume root path '/var/lib/docker/volumes/pause-20220516225202-2444': mkdir /var/lib/docker/volumes/pause-20220516225202-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for pause-20220516225202-2444 container: docker volume create pause-20220516225202-2444 --label name.minikube.sigs.k8s.io=pause-20220516225202-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create pause-20220516225202-2444: error while creating volume root path '/var/lib/docker/volumes/pause-20220516225202-2444': mkdir /var/lib/docker/volumes/pause-20220516225202-2444: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p pause-20220516225202-2444 --memory=2048 --install-addons=false --wait=all --driver=docker" : exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/Start]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20220516225202-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect pause-20220516225202-2444: exit status 1 (1.2724409s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: pause-20220516225202-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20220516225202-2444 -n pause-20220516225202-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20220516225202-2444 -n pause-20220516225202-2444: exit status 7 (3.0514753s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:53:28.291506    7580 status.go:247] status error: host: state: unknown state "pause-20220516225202-2444": docker container inspect pause-20220516225202-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220516225202-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-20220516225202-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestPause/serial/Start (85.38s)
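Each failed start above walks through candidate subnets before giving up: 192.168.76.0/24, then 192.168.85.0/24, a step of 9 in the third octet. That retry walk can be sketched with the standard ipaddress module — the starting point and step size here are inferred from this log, not confirmed against minikube's source, so treat them as assumptions:

```python
import ipaddress

def next_free_subnet(taken, start="192.168.49.0/24", step=9, attempts=20):
    """Try successive /24s (third octet += step) until one avoids `taken`."""
    net = ipaddress.ip_network(start)
    taken_nets = [ipaddress.ip_network(t) for t in taken]
    for _ in range(attempts):
        if not any(net.overlaps(t) for t in taken_nets):
            return str(net)
        octets = str(net.network_address).split(".")
        octets[2] = str(int(octets[2]) + step)
        net = ipaddress.ip_network(f"{'.'.join(octets)}/24")
    return None

# With 192.168.49.0/24 and 192.168.58.0/24 occupied, the next candidate is 192.168.67.0/24.
print(next_free_subnet(["192.168.49.0/24", "192.168.58.0/24"]))
```

Note that every candidate here fails in the log because the stale br-301630a99a7e and br-ea4bbeff936d bridges already occupy them, which is why each test ends at the same read-only-volume exit after exhausting its retries.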

TestStoppedBinaryUpgrade/MinikubeLogs (3.22s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-20220516224650-2444
version_upgrade_test.go:213: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p stopped-upgrade-20220516224650-2444: exit status 80 (3.2059509s)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------------------|------------------------------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                                  Args                                  |                 Profile                  |       User        |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------|------------------------------------------|-------------------|----------------|---------------------|---------------------|
	| delete  | -p                                                                     | download-only-20220516215532-2444        | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 21:56 GMT | 16 May 22 21:56 GMT |
	|         | download-only-20220516215532-2444                                      |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | download-only-20220516215532-2444        | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 21:56 GMT | 16 May 22 21:56 GMT |
	|         | download-only-20220516215532-2444                                      |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | download-docker-20220516215629-2444      | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 21:57 GMT | 16 May 22 21:57 GMT |
	|         | download-docker-20220516215629-2444                                    |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | binary-mirror-20220516215715-2444        | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 21:57 GMT | 16 May 22 21:57 GMT |
	|         | binary-mirror-20220516215715-2444                                      |                                          |                   |                |                     |                     |
	| delete  | -p addons-20220516215732-2444                                          | addons-20220516215732-2444               | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 21:58 GMT | 16 May 22 21:58 GMT |
	| delete  | -p nospam-20220516215858-2444                                          | nospam-20220516215858-2444               | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:02 GMT | 16 May 22 22:02 GMT |
	| cache   | functional-20220516220221-2444                                         | functional-20220516220221-2444           | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:05 GMT | 16 May 22 22:05 GMT |
	|         | cache add k8s.gcr.io/pause:3.1                                         |                                          |                   |                |                     |                     |
	| cache   | functional-20220516220221-2444                                         | functional-20220516220221-2444           | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:05 GMT | 16 May 22 22:05 GMT |
	|         | cache add k8s.gcr.io/pause:3.3                                         |                                          |                   |                |                     |                     |
	| cache   | functional-20220516220221-2444                                         | functional-20220516220221-2444           | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:05 GMT | 16 May 22 22:05 GMT |
	|         | cache add                                                              |                                          |                   |                |                     |                     |
	|         | k8s.gcr.io/pause:latest                                                |                                          |                   |                |                     |                     |
	| cache   | delete k8s.gcr.io/pause:3.3                                            | minikube                                 | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:06 GMT | 16 May 22 22:06 GMT |
	| cache   | list                                                                   | minikube                                 | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:06 GMT | 16 May 22 22:06 GMT |
	| cache   | functional-20220516220221-2444                                         | functional-20220516220221-2444           | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:06 GMT | 16 May 22 22:06 GMT |
	|         | cache reload                                                           |                                          |                   |                |                     |                     |
	| cache   | delete k8s.gcr.io/pause:3.1                                            | minikube                                 | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:06 GMT | 16 May 22 22:06 GMT |
	| cache   | delete k8s.gcr.io/pause:latest                                         | minikube                                 | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:06 GMT | 16 May 22 22:06 GMT |
	| config  | functional-20220516220221-2444                                         | functional-20220516220221-2444           | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:08 GMT | 16 May 22 22:08 GMT |
	|         | config unset cpus                                                      |                                          |                   |                |                     |                     |
	| config  | functional-20220516220221-2444                                         | functional-20220516220221-2444           | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:08 GMT | 16 May 22 22:08 GMT |
	|         | config set cpus 2                                                      |                                          |                   |                |                     |                     |
	| config  | functional-20220516220221-2444                                         | functional-20220516220221-2444           | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:08 GMT | 16 May 22 22:08 GMT |
	|         | config get cpus                                                        |                                          |                   |                |                     |                     |
	| config  | functional-20220516220221-2444                                         | functional-20220516220221-2444           | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:08 GMT | 16 May 22 22:08 GMT |
	|         | config unset cpus                                                      |                                          |                   |                |                     |                     |
	| addons  | functional-20220516220221-2444                                         | functional-20220516220221-2444           | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:08 GMT | 16 May 22 22:08 GMT |
	|         | addons list                                                            |                                          |                   |                |                     |                     |
	| addons  | functional-20220516220221-2444                                         | functional-20220516220221-2444           | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:08 GMT | 16 May 22 22:08 GMT |
	|         | addons list -o json                                                    |                                          |                   |                |                     |                     |
	| profile | list --output json                                                     | minikube                                 | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:08 GMT | 16 May 22 22:08 GMT |
	| profile | list                                                                   | minikube                                 | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:08 GMT | 16 May 22 22:08 GMT |
	| profile | list -l                                                                | minikube                                 | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:08 GMT | 16 May 22 22:08 GMT |
	| profile | list -o json                                                           | minikube                                 | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:08 GMT | 16 May 22 22:08 GMT |
	| profile | list -o json --light                                                   | minikube                                 | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:08 GMT | 16 May 22 22:08 GMT |
	| image   | functional-20220516220221-2444 image load --daemon                     | functional-20220516220221-2444           | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:09 GMT | 16 May 22 22:09 GMT |
	|         | gcr.io/google-containers/addon-resizer:functional-20220516220221-2444  |                                          |                   |                |                     |                     |
	| image   | functional-20220516220221-2444                                         | functional-20220516220221-2444           | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:09 GMT | 16 May 22 22:09 GMT |
	|         | image ls                                                               |                                          |                   |                |                     |                     |
	| image   | functional-20220516220221-2444 image load --daemon                     | functional-20220516220221-2444           | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:09 GMT | 16 May 22 22:09 GMT |
	|         | gcr.io/google-containers/addon-resizer:functional-20220516220221-2444  |                                          |                   |                |                     |                     |
	| image   | functional-20220516220221-2444                                         | functional-20220516220221-2444           | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:09 GMT | 16 May 22 22:09 GMT |
	|         | image ls                                                               |                                          |                   |                |                     |                     |
	| image   | functional-20220516220221-2444 image save                              | functional-20220516220221-2444           | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:09 GMT | 16 May 22 22:09 GMT |
	|         | gcr.io/google-containers/addon-resizer:functional-20220516220221-2444  |                                          |                   |                |                     |                     |
	|         | C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar |                                          |                   |                |                     |                     |
	| image   | functional-20220516220221-2444 image rm                                | functional-20220516220221-2444           | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:09 GMT | 16 May 22 22:09 GMT |
	|         | gcr.io/google-containers/addon-resizer:functional-20220516220221-2444  |                                          |                   |                |                     |                     |
	| image   | functional-20220516220221-2444                                         | functional-20220516220221-2444           | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:09 GMT | 16 May 22 22:09 GMT |
	|         | image ls                                                               |                                          |                   |                |                     |                     |
	| image   | functional-20220516220221-2444                                         | functional-20220516220221-2444           | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:09 GMT | 16 May 22 22:09 GMT |
	|         | image ls --format short                                                |                                          |                   |                |                     |                     |
	| image   | functional-20220516220221-2444                                         | functional-20220516220221-2444           | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:09 GMT | 16 May 22 22:09 GMT |
	|         | image ls --format yaml                                                 |                                          |                   |                |                     |                     |
	| image   | functional-20220516220221-2444                                         | functional-20220516220221-2444           | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:09 GMT | 16 May 22 22:09 GMT |
	|         | image ls --format json                                                 |                                          |                   |                |                     |                     |
	| image   | functional-20220516220221-2444                                         | functional-20220516220221-2444           | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:09 GMT | 16 May 22 22:09 GMT |
	|         | image ls --format table                                                |                                          |                   |                |                     |                     |
	| image   | functional-20220516220221-2444 image build -t                          | functional-20220516220221-2444           | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:09 GMT | 16 May 22 22:09 GMT |
	|         | localhost/my-image:functional-20220516220221-2444                      |                                          |                   |                |                     |                     |
	|         | testdata\build                                                         |                                          |                   |                |                     |                     |
	| image   | functional-20220516220221-2444                                         | functional-20220516220221-2444           | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:09 GMT | 16 May 22 22:09 GMT |
	|         | image ls                                                               |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | functional-20220516220221-2444           | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:14 GMT | 16 May 22 22:14 GMT |
	|         | functional-20220516220221-2444                                         |                                          |                   |                |                     |                     |
	| addons  | ingress-addon-legacy-20220516221408-2444                               | ingress-addon-legacy-20220516221408-2444 | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:15 GMT | 16 May 22 22:15 GMT |
	|         | addons enable ingress-dns                                              |                                          |                   |                |                     |                     |
	|         | --alsologtostderr -v=5                                                 |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | ingress-addon-legacy-20220516221408-2444 | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:15 GMT | 16 May 22 22:15 GMT |
	|         | ingress-addon-legacy-20220516221408-2444                               |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | json-output-20220516221549-2444          | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:17 GMT | 16 May 22 22:17 GMT |
	|         | json-output-20220516221549-2444                                        |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | json-output-error-20220516221743-2444    | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:17 GMT | 16 May 22 22:17 GMT |
	|         | json-output-error-20220516221743-2444                                  |                                          |                   |                |                     |                     |
	| start   | -p                                                                     | docker-network-20220516221751-2444       | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:17 GMT | 16 May 22 22:21 GMT |
	|         | docker-network-20220516221751-2444                                     |                                          |                   |                |                     |                     |
	|         | --network=                                                             |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | docker-network-20220516221751-2444       | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:21 GMT | 16 May 22 22:21 GMT |
	|         | docker-network-20220516221751-2444                                     |                                          |                   |                |                     |                     |
	| start   | -p                                                                     | docker-network-20220516222155-2444       | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:21 GMT | 16 May 22 22:25 GMT |
	|         | docker-network-20220516222155-2444                                     |                                          |                   |                |                     |                     |
	|         | --network=bridge                                                       |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | docker-network-20220516222155-2444       | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:25 GMT | 16 May 22 22:25 GMT |
	|         | docker-network-20220516222155-2444                                     |                                          |                   |                |                     |                     |
	| start   | -p                                                                     | custom-subnet-20220516222549-2444        | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:25 GMT | 16 May 22 22:29 GMT |
	|         | custom-subnet-20220516222549-2444                                      |                                          |                   |                |                     |                     |
	|         | --subnet=192.168.60.0/24                                               |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | custom-subnet-20220516222549-2444        | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:29 GMT | 16 May 22 22:29 GMT |
	|         | custom-subnet-20220516222549-2444                                      |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | mount-start-2-20220516222944-2444        | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:31 GMT | 16 May 22 22:31 GMT |
	|         | mount-start-2-20220516222944-2444                                      |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | mount-start-1-20220516222944-2444        | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:31 GMT | 16 May 22 22:31 GMT |
	|         | mount-start-1-20220516222944-2444                                      |                                          |                   |                |                     |                     |
	| profile | list --output json                                                     | minikube                                 | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:33 GMT | 16 May 22 22:33 GMT |
	| delete  | -p                                                                     | multinode-20220516223121-2444-m02        | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:41 GMT | 16 May 22 22:41 GMT |
	|         | multinode-20220516223121-2444-m02                                      |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | multinode-20220516223121-2444            | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:41 GMT | 16 May 22 22:41 GMT |
	|         | multinode-20220516223121-2444                                          |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | test-preload-20220516224147-2444         | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:43 GMT | 16 May 22 22:43 GMT |
	|         | test-preload-20220516224147-2444                                       |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | scheduled-stop-20220516224317-2444       | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:44 GMT | 16 May 22 22:44 GMT |
	|         | scheduled-stop-20220516224317-2444                                     |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | skaffold-20220516224447-2444             | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:46 GMT | 16 May 22 22:46 GMT |
	|         | skaffold-20220516224447-2444                                           |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | insufficient-storage-20220516224618-2444 | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:46 GMT | 16 May 22 22:46 GMT |
	|         | insufficient-storage-20220516224618-2444                               |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | offline-docker-20220516224650-2444       | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:48 GMT | 16 May 22 22:48 GMT |
	|         | offline-docker-20220516224650-2444                                     |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | NoKubernetes-20220516224650-2444         | minikube2\jenkins | v1.26.0-beta.0 | 16 May 22 22:51 GMT | 16 May 22 22:52 GMT |
	|         | NoKubernetes-20220516224650-2444                                       |                                          |                   |                |                     |                     |
	|---------|------------------------------------------------------------------------|------------------------------------------|-------------------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/16 22:52:03
	Running on machine: minikube2
	Binary: Built with gc go1.18.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0516 22:52:03.178211    6028 out.go:296] Setting OutFile to fd 1756 ...
	I0516 22:52:03.232867    6028 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:52:03.232867    6028 out.go:309] Setting ErrFile to fd 1768...
	I0516 22:52:03.232867    6028 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:52:03.245871    6028 out.go:303] Setting JSON to false
	I0516 22:52:03.248866    6028 start.go:115] hostinfo: {"hostname":"minikube2","uptime":4635,"bootTime":1652736888,"procs":151,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0516 22:52:03.248866    6028 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0516 22:52:03.264105    6028 out.go:177] * [pause-20220516225202-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0516 22:52:03.269267    6028 notify.go:193] Checking for updates...
	I0516 22:52:03.272833    6028 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0516 22:52:03.275139    6028 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0516 22:52:03.278535    6028 out.go:177]   - MINIKUBE_LOCATION=12739
	I0516 22:52:03.280808    6028 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0516 22:52:03.283742    6028 config.go:178] Loaded profile config "missing-upgrade-20220516224650-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0516 22:52:03.284420    6028 config.go:178] Loaded profile config "multinode-20220516223121-2444-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:52:03.284420    6028 config.go:178] Loaded profile config "running-upgrade-20220516224826-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0516 22:52:03.285099    6028 config.go:178] Loaded profile config "stopped-upgrade-20220516224650-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0516 22:52:03.285099    6028 driver.go:358] Setting default libvirt URI to qemu:///system
	I0516 22:52:05.821551    6028 docker.go:137] docker version: linux-20.10.14
	I0516 22:52:05.829032    6028 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:52:07.932081    6028 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1028517s)
	I0516 22:52:07.933539    6028 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 22:52:06.8633208 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:52:07.938302    6028 out.go:177] * Using the docker driver based on user configuration
	I0516 22:52:07.940437    6028 start.go:284] selected driver: docker
	I0516 22:52:07.940437    6028 start.go:806] validating driver "docker" against <nil>
	I0516 22:52:07.940437    6028 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0516 22:52:08.063108    6028 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:52:10.092933    6028 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0298085s)
	I0516 22:52:10.092933    6028 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 22:52:09.0559773 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:52:10.093794    6028 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0516 22:52:10.094360    6028 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0516 22:52:10.099439    6028 out.go:177] * Using Docker Desktop driver with the root privilege
	I0516 22:52:10.102319    6028 cni.go:95] Creating CNI manager for ""
	I0516 22:52:10.102319    6028 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0516 22:52:10.102319    6028 start_flags.go:306] config:
	{Name:pause-20220516225202-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:pause-20220516225202-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 22:52:10.104626    6028 out.go:177] * Starting control plane node pause-20220516225202-2444 in cluster pause-20220516225202-2444
	I0516 22:52:10.110302    6028 cache.go:120] Beginning downloading kic base image for docker with docker
	I0516 22:52:10.113949    6028 out.go:177] * Pulling base image ...
	I0516 22:52:10.116293    6028 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0516 22:52:10.116821    6028 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0516 22:52:10.116821    6028 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0516 22:52:10.116821    6028 cache.go:57] Caching tarball of preloaded images
	I0516 22:52:10.117342    6028 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0516 22:52:10.117387    6028 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0516 22:52:10.117387    6028 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\pause-20220516225202-2444\config.json ...
	I0516 22:52:10.117387    6028 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\pause-20220516225202-2444\config.json: {Name:mk898f3304ddd437f560b2e3390c6a93cc02f295 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0516 22:52:11.170282    6028 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0516 22:52:11.170406    6028 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:52:11.170406    6028 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:52:11.170406    6028 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
	I0516 22:52:11.170406    6028 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory, skipping pull
	I0516 22:52:11.170406    6028 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in cache, skipping pull
	I0516 22:52:11.170949    6028 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
	I0516 22:52:11.170949    6028 cache.go:160] Loading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from local cache
	I0516 22:52:11.170949    6028 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:52:13.420800    6028 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from cached tarball
	I0516 22:52:13.420939    6028 cache.go:206] Successfully downloaded all kic artifacts
	I0516 22:52:13.421003    6028 start.go:352] acquiring machines lock for pause-20220516225202-2444: {Name:mke7fa6650bca63b90a260cf89e06da55c71b0ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:52:13.421003    6028 start.go:356] acquired machines lock for "pause-20220516225202-2444" in 0s
	I0516 22:52:13.421003    6028 start.go:91] Provisioning new machine with config: &{Name:pause-20220516225202-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:pause-20220516225202-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0516 22:52:13.421615    6028 start.go:131] createHost starting for "" (driver="docker")
	I0516 22:52:13.427273    6028 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0516 22:52:13.427941    6028 start.go:165] libmachine.API.Create for "pause-20220516225202-2444" (driver="docker")
	I0516 22:52:13.427941    6028 client.go:168] LocalClient.Create starting
	I0516 22:52:13.428623    6028 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 22:52:13.428623    6028 main.go:134] libmachine: Decoding PEM data...
	I0516 22:52:13.428623    6028 main.go:134] libmachine: Parsing certificate...
	I0516 22:52:13.429287    6028 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 22:52:13.429287    6028 main.go:134] libmachine: Decoding PEM data...
	I0516 22:52:13.429287    6028 main.go:134] libmachine: Parsing certificate...
	I0516 22:52:13.440645    6028 cli_runner.go:164] Run: docker network inspect pause-20220516225202-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:52:14.510021    6028 cli_runner.go:211] docker network inspect pause-20220516225202-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:52:14.510021    6028 cli_runner.go:217] Completed: docker network inspect pause-20220516225202-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.069227s)
	I0516 22:52:14.517867    6028 network_create.go:272] running [docker network inspect pause-20220516225202-2444] to gather additional debugging logs...
	I0516 22:52:14.517867    6028 cli_runner.go:164] Run: docker network inspect pause-20220516225202-2444
	W0516 22:52:15.608627    6028 cli_runner.go:211] docker network inspect pause-20220516225202-2444 returned with exit code 1
	I0516 22:52:15.608627    6028 cli_runner.go:217] Completed: docker network inspect pause-20220516225202-2444: (1.0907507s)
	I0516 22:52:15.608627    6028 network_create.go:275] error running [docker network inspect pause-20220516225202-2444]: docker network inspect pause-20220516225202-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: pause-20220516225202-2444
	I0516 22:52:15.608627    6028 network_create.go:277] output of [docker network inspect pause-20220516225202-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: pause-20220516225202-2444
	
	** /stderr **
	I0516 22:52:15.616612    6028 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 22:52:16.671400    6028 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0546232s)
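	(editor's note) The `--format` argument in the `docker network inspect` commands above is a Go template that the docker CLI evaluates against the network object. As a minimal illustrative sketch of how such a template renders, using Go's `text/template` against stand-in types (not docker's actual API types), the `{{range .IPAM.Config}}{{.Subnet}}{{end}}` fragment expands like this:

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// Stand-in types carrying only the field names the --format template
// reads; illustrative, not docker's real network representation.
type ipamConfig struct {
	Subnet  string
	Gateway string
}

type network struct {
	Name string
	IPAM struct{ Config []ipamConfig }
}

// renderNetwork evaluates a trimmed-down version of the inspect template.
func renderNetwork(n network) string {
	tmpl := template.Must(template.New("net").Parse(
		`{"Name": "{{.Name}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}"}`))
	var sb strings.Builder
	if err := tmpl.Execute(&sb, n); err != nil {
		panic(err)
	}
	return sb.String()
}

func main() {
	var n network
	n.Name = "bridge"
	n.IPAM.Config = []ipamConfig{{Subnet: "172.17.0.0/16", Gateway: "172.17.0.1"}}
	fmt.Println(renderNetwork(n))
	// → {"Name": "bridge","Subnet": "172.17.0.0/16","Gateway": "172.17.0.1"}
}
```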
	I0516 22:52:16.694157    6028 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000790158] misses:0}
	I0516 22:52:16.695051    6028 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:52:16.695051    6028 network_create.go:115] attempt to create docker network pause-20220516225202-2444 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0516 22:52:16.702043    6028 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20220516225202-2444
	W0516 22:52:17.727129    6028 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20220516225202-2444 returned with exit code 1
	I0516 22:52:17.727129    6028 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20220516225202-2444: (1.0248546s)
	W0516 22:52:17.727260    6028 network_create.go:107] failed to create docker network pause-20220516225202-2444 192.168.49.0/24, will retry: subnet is taken
	I0516 22:52:17.746586    6028 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000790158] amended:false}} dirty:map[] misses:0}
	I0516 22:52:17.746586    6028 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:52:17.765369    6028 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000790158] amended:true}} dirty:map[192.168.49.0:0xc000790158 192.168.58.0:0xc000f863e8] misses:0}
	I0516 22:52:17.765369    6028 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:52:17.765369    6028 network_create.go:115] attempt to create docker network pause-20220516225202-2444 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0516 22:52:17.773640    6028 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20220516225202-2444
	W0516 22:52:18.831318    6028 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20220516225202-2444 returned with exit code 1
	I0516 22:52:18.831318    6028 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20220516225202-2444: (1.0574373s)
	W0516 22:52:18.831318    6028 network_create.go:107] failed to create docker network pause-20220516225202-2444 192.168.58.0/24, will retry: subnet is taken
	I0516 22:52:18.851363    6028 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000790158] amended:true}} dirty:map[192.168.49.0:0xc000790158 192.168.58.0:0xc000f863e8] misses:1}
	I0516 22:52:18.851363    6028 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:52:18.892243    6028 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000790158] amended:true}} dirty:map[192.168.49.0:0xc000790158 192.168.58.0:0xc000f863e8 192.168.67.0:0xc000790298] misses:1}
	I0516 22:52:18.892243    6028 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:52:18.892243    6028 network_create.go:115] attempt to create docker network pause-20220516225202-2444 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0516 22:52:18.901355    6028 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20220516225202-2444
	W0516 22:52:19.954047    6028 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20220516225202-2444 returned with exit code 1
	I0516 22:52:19.954047    6028 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20220516225202-2444: (1.0526827s)
	W0516 22:52:19.954047    6028 network_create.go:107] failed to create docker network pause-20220516225202-2444 192.168.67.0/24, will retry: subnet is taken
	I0516 22:52:19.972236    6028 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000790158] amended:true}} dirty:map[192.168.49.0:0xc000790158 192.168.58.0:0xc000f863e8 192.168.67.0:0xc000790298] misses:2}
	I0516 22:52:19.972236    6028 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:52:19.989998    6028 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000790158] amended:true}} dirty:map[192.168.49.0:0xc000790158 192.168.58.0:0xc000f863e8 192.168.67.0:0xc000790298 192.168.76.0:0xc000790358] misses:2}
	I0516 22:52:19.989998    6028 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:52:19.989998    6028 network_create.go:115] attempt to create docker network pause-20220516225202-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0516 22:52:20.001258    6028 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20220516225202-2444
	W0516 22:52:21.061634    6028 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20220516225202-2444 returned with exit code 1
	I0516 22:52:21.061731    6028 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20220516225202-2444: (1.060367s)
	E0516 22:52:21.061766    6028 network_create.go:104] error while trying to create docker network pause-20220516225202-2444 192.168.76.0/24: create docker network pause-20220516225202-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20220516225202-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4c855ee8fddabd945acb86883648f344ea93fde523e5e9bd6de34e1066bd9eeb (br-4c855ee8fdda): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	W0516 22:52:21.061766    6028 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network pause-20220516225202-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20220516225202-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4c855ee8fddabd945acb86883648f344ea93fde523e5e9bd6de34e1066bd9eeb (br-4c855ee8fdda): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
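	(editor's note) Every retry above dies with "networks have overlapping IPv4": the daemon rejects a new bridge network whose subnet overlaps an existing `br-*` network, here one left over from earlier test runs. A minimal sketch, not minikube's actual code, of how such a CIDR overlap can be detected in Go:

```go
package main

import (
	"fmt"
	"net"
)

// cidrsOverlap reports whether two CIDR blocks share any addresses.
// CIDR blocks are either disjoint or nested, so it suffices to check
// whether either network's base address falls inside the other.
func cidrsOverlap(a, b string) (bool, error) {
	_, na, err := net.ParseCIDR(a)
	if err != nil {
		return false, err
	}
	_, nb, err := net.ParseCIDR(b)
	if err != nil {
		return false, err
	}
	return na.Contains(nb.IP) || nb.Contains(na.IP), nil
}

func main() {
	pairs := [][2]string{
		{"192.168.49.0/24", "192.168.49.0/24"}, // identical: overlap
		{"192.168.0.0/16", "192.168.58.0/24"},  // nested: overlap
		{"192.168.49.0/24", "192.168.58.0/24"}, // disjoint: no overlap
	}
	for _, p := range pairs {
		ok, err := cidrsOverlap(p[0], p[1])
		if err != nil {
			panic(err)
		}
		fmt.Println(p[0], p[1], ok)
	}
}
```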
	I0516 22:52:21.077499    6028 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 22:52:22.097426    6028 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0198903s)
	I0516 22:52:22.106536    6028 cli_runner.go:164] Run: docker volume create pause-20220516225202-2444 --label name.minikube.sigs.k8s.io=pause-20220516225202-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 22:52:23.135917    6028 cli_runner.go:211] docker volume create pause-20220516225202-2444 --label name.minikube.sigs.k8s.io=pause-20220516225202-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 22:52:23.135917    6028 cli_runner.go:217] Completed: docker volume create pause-20220516225202-2444 --label name.minikube.sigs.k8s.io=pause-20220516225202-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0293724s)
	I0516 22:52:23.135917    6028 client.go:171] LocalClient.Create took 9.7078951s
	
	* 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "stopped-upgrade-20220516224650-2444": docker container inspect stopped-upgrade-20220516224650-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: stopped-upgrade-20220516224650-2444
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_logs_80bd2298da0c083373823443180fffe8ad701919_703.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
version_upgrade_test.go:215: `minikube logs` after upgrade to HEAD from v1.9.0 failed: exit status 80
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (3.22s)

TestStartStop/group/old-k8s-version/serial/FirstStart (86.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-20220516225533-2444 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p old-k8s-version-20220516225533-2444 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0: exit status 60 (1m22.1067743s)

-- stdout --
	* [old-k8s-version-20220516225533-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node old-k8s-version-20220516225533-2444 in cluster old-k8s-version-20220516225533-2444
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "old-k8s-version-20220516225533-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0516 22:55:33.867454    8296 out.go:296] Setting OutFile to fd 1712 ...
	I0516 22:55:33.933517    8296 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:55:33.933517    8296 out.go:309] Setting ErrFile to fd 1892...
	I0516 22:55:33.933517    8296 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:55:33.945753    8296 out.go:303] Setting JSON to false
	I0516 22:55:33.948256    8296 start.go:115] hostinfo: {"hostname":"minikube2","uptime":4846,"bootTime":1652736887,"procs":153,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0516 22:55:33.948256    8296 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0516 22:55:33.953697    8296 out.go:177] * [old-k8s-version-20220516225533-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0516 22:55:33.958157    8296 notify.go:193] Checking for updates...
	I0516 22:55:33.960737    8296 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0516 22:55:33.962939    8296 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0516 22:55:33.965366    8296 out.go:177]   - MINIKUBE_LOCATION=12739
	I0516 22:55:33.968004    8296 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0516 22:55:33.970511    8296 config.go:178] Loaded profile config "cert-expiration-20220516225440-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:55:33.970511    8296 config.go:178] Loaded profile config "cert-options-20220516225447-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:55:33.971509    8296 config.go:178] Loaded profile config "docker-flags-20220516225417-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:55:33.971509    8296 config.go:178] Loaded profile config "multinode-20220516223121-2444-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:55:33.971509    8296 driver.go:358] Setting default libvirt URI to qemu:///system
	I0516 22:55:36.732440    8296 docker.go:137] docker version: linux-20.10.14
	I0516 22:55:36.741703    8296 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:55:38.943307    8296 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.2015858s)
	I0516 22:55:38.943307    8296 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 22:55:37.8222725 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:55:38.946990    8296 out.go:177] * Using the docker driver based on user configuration
	I0516 22:55:38.949951    8296 start.go:284] selected driver: docker
	I0516 22:55:38.949951    8296 start.go:806] validating driver "docker" against <nil>
	I0516 22:55:38.949951    8296 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0516 22:55:39.036118    8296 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:55:41.198005    8296 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1616007s)
	I0516 22:55:41.198585    8296 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 22:55:40.1121829 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:55:41.198658    8296 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0516 22:55:41.199614    8296 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0516 22:55:41.202955    8296 out.go:177] * Using Docker Desktop driver with the root privilege
	I0516 22:55:41.204948    8296 cni.go:95] Creating CNI manager for ""
	I0516 22:55:41.204948    8296 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0516 22:55:41.204948    8296 start_flags.go:306] config:
	{Name:old-k8s-version-20220516225533-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220516225533-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contai
nerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 22:55:41.207980    8296 out.go:177] * Starting control plane node old-k8s-version-20220516225533-2444 in cluster old-k8s-version-20220516225533-2444
	I0516 22:55:41.211629    8296 cache.go:120] Beginning downloading kic base image for docker with docker
	I0516 22:55:41.213706    8296 out.go:177] * Pulling base image ...
	I0516 22:55:41.216080    8296 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0516 22:55:41.216080    8296 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0516 22:55:41.216080    8296 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0516 22:55:41.216080    8296 cache.go:57] Caching tarball of preloaded images
	I0516 22:55:41.217247    8296 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0516 22:55:41.217247    8296 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0516 22:55:41.218273    8296 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-20220516225533-2444\config.json ...
	I0516 22:55:41.218273    8296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-20220516225533-2444\config.json: {Name:mkaad92d4ab99235a7070017fc9d653d141a4da0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0516 22:55:42.323055    8296 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0516 22:55:42.323345    8296 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:55:42.323601    8296 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:55:42.323601    8296 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
	I0516 22:55:42.323601    8296 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory, skipping pull
	I0516 22:55:42.323601    8296 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in cache, skipping pull
	I0516 22:55:42.323601    8296 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
	I0516 22:55:42.323601    8296 cache.go:160] Loading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from local cache
	I0516 22:55:42.323601    8296 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:55:44.650360    8296 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from cached tarball
	I0516 22:55:44.650410    8296 cache.go:206] Successfully downloaded all kic artifacts
	I0516 22:55:44.650567    8296 start.go:352] acquiring machines lock for old-k8s-version-20220516225533-2444: {Name:mk5023de8a7eabf3a3502247916ec67ae4aced29 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:55:44.650828    8296 start.go:356] acquired machines lock for "old-k8s-version-20220516225533-2444" in 217.4µs
	I0516 22:55:44.651131    8296 start.go:91] Provisioning new machine with config: &{Name:old-k8s-version-20220516225533-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220516225533-2444 Namespace:default APIServ
erName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0516 22:55:44.651271    8296 start.go:131] createHost starting for "" (driver="docker")
	I0516 22:55:44.655614    8296 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0516 22:55:44.656289    8296 start.go:165] libmachine.API.Create for "old-k8s-version-20220516225533-2444" (driver="docker")
	I0516 22:55:44.656341    8296 client.go:168] LocalClient.Create starting
	I0516 22:55:44.656400    8296 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 22:55:44.657154    8296 main.go:134] libmachine: Decoding PEM data...
	I0516 22:55:44.657154    8296 main.go:134] libmachine: Parsing certificate...
	I0516 22:55:44.657389    8296 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 22:55:44.657389    8296 main.go:134] libmachine: Decoding PEM data...
	I0516 22:55:44.657389    8296 main.go:134] libmachine: Parsing certificate...
	I0516 22:55:44.666981    8296 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220516225533-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:55:45.815062    8296 cli_runner.go:211] docker network inspect old-k8s-version-20220516225533-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:55:45.815062    8296 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220516225533-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1480716s)
	I0516 22:55:45.825060    8296 network_create.go:272] running [docker network inspect old-k8s-version-20220516225533-2444] to gather additional debugging logs...
	I0516 22:55:45.825060    8296 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220516225533-2444
	W0516 22:55:46.976344    8296 cli_runner.go:211] docker network inspect old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:55:46.976344    8296 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220516225533-2444: (1.151275s)
	I0516 22:55:46.976344    8296 network_create.go:275] error running [docker network inspect old-k8s-version-20220516225533-2444]: docker network inspect old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220516225533-2444
	I0516 22:55:46.976344    8296 network_create.go:277] output of [docker network inspect old-k8s-version-20220516225533-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220516225533-2444
	
	** /stderr **
	I0516 22:55:46.984776    8296 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 22:55:48.097764    8296 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1129785s)
	I0516 22:55:48.130875    8296 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000698130] misses:0}
	I0516 22:55:48.131412    8296 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:55:48.131565    8296 network_create.go:115] attempt to create docker network old-k8s-version-20220516225533-2444 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0516 22:55:48.139977    8296 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444
	W0516 22:55:49.232958    8296 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:55:49.233051    8296 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444: (1.0929274s)
	W0516 22:55:49.233093    8296 network_create.go:107] failed to create docker network old-k8s-version-20220516225533-2444 192.168.49.0/24, will retry: subnet is taken
	I0516 22:55:49.252299    8296 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000698130] amended:false}} dirty:map[] misses:0}
	I0516 22:55:49.252299    8296 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:55:49.273296    8296 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000698130] amended:true}} dirty:map[192.168.49.0:0xc000698130 192.168.58.0:0xc0000063c0] misses:0}
	I0516 22:55:49.273921    8296 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:55:49.273921    8296 network_create.go:115] attempt to create docker network old-k8s-version-20220516225533-2444 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0516 22:55:49.282799    8296 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444
	W0516 22:55:50.410011    8296 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:55:50.410284    8296 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444: (1.1272023s)
	W0516 22:55:50.410436    8296 network_create.go:107] failed to create docker network old-k8s-version-20220516225533-2444 192.168.58.0/24, will retry: subnet is taken
	I0516 22:55:50.431442    8296 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000698130] amended:true}} dirty:map[192.168.49.0:0xc000698130 192.168.58.0:0xc0000063c0] misses:1}
	I0516 22:55:50.431442    8296 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:55:50.449833    8296 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000698130] amended:true}} dirty:map[192.168.49.0:0xc000698130 192.168.58.0:0xc0000063c0 192.168.67.0:0xc000698280] misses:1}
	I0516 22:55:50.449833    8296 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:55:50.449833    8296 network_create.go:115] attempt to create docker network old-k8s-version-20220516225533-2444 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0516 22:55:50.459784    8296 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444
	W0516 22:55:51.608991    8296 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:55:51.609020    8296 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444: (1.1490413s)
	W0516 22:55:51.609108    8296 network_create.go:107] failed to create docker network old-k8s-version-20220516225533-2444 192.168.67.0/24, will retry: subnet is taken
	I0516 22:55:51.626876    8296 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000698130] amended:true}} dirty:map[192.168.49.0:0xc000698130 192.168.58.0:0xc0000063c0 192.168.67.0:0xc000698280] misses:2}
	I0516 22:55:51.626876    8296 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:55:51.644898    8296 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000698130] amended:true}} dirty:map[192.168.49.0:0xc000698130 192.168.58.0:0xc0000063c0 192.168.67.0:0xc000698280 192.168.76.0:0xc000698318] misses:2}
	I0516 22:55:51.644898    8296 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:55:51.644898    8296 network_create.go:115] attempt to create docker network old-k8s-version-20220516225533-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0516 22:55:51.654868    8296 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444
	W0516 22:55:52.750654    8296 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:55:52.750654    8296 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444: (1.0957766s)
	E0516 22:55:52.750654    8296 network_create.go:104] error while trying to create docker network old-k8s-version-20220516225533-2444 192.168.76.0/24: create docker network old-k8s-version-20220516225533-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 5e78799ab05bbb77b2f41ee3f6c8279b55a06cf6e9bf90471d8d5a92d060cd24 (br-5e78799ab05b): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	W0516 22:55:52.750654    8296 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network old-k8s-version-20220516225533-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 5e78799ab05bbb77b2f41ee3f6c8279b55a06cf6e9bf90471d8d5a92d060cd24 (br-5e78799ab05b): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network old-k8s-version-20220516225533-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 5e78799ab05bbb77b2f41ee3f6c8279b55a06cf6e9bf90471d8d5a92d060cd24 (br-5e78799ab05b): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	I0516 22:55:52.765639    8296 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 22:55:53.915207    8296 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1495588s)
	I0516 22:55:53.924973    8296 cli_runner.go:164] Run: docker volume create old-k8s-version-20220516225533-2444 --label name.minikube.sigs.k8s.io=old-k8s-version-20220516225533-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 22:55:55.112176    8296 cli_runner.go:211] docker volume create old-k8s-version-20220516225533-2444 --label name.minikube.sigs.k8s.io=old-k8s-version-20220516225533-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 22:55:55.112176    8296 cli_runner.go:217] Completed: docker volume create old-k8s-version-20220516225533-2444 --label name.minikube.sigs.k8s.io=old-k8s-version-20220516225533-2444 --label created_by.minikube.sigs.k8s.io=true: (1.187193s)
	I0516 22:55:55.112176    8296 client.go:171] LocalClient.Create took 10.4557459s
	I0516 22:55:57.134133    8296 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:55:57.143817    8296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444
	W0516 22:55:58.229569    8296 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:55:58.229569    8296 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: (1.0847508s)
	I0516 22:55:58.229569    8296 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:55:58.525975    8296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444
	W0516 22:55:59.648770    8296 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:55:59.648857    8296 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: (1.1225318s)
	W0516 22:55:59.649011    8296 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	
	W0516 22:55:59.649062    8296 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:55:59.662459    8296 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:55:59.671801    8296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444
	W0516 22:56:00.782058    8296 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:56:00.782058    8296 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: (1.110188s)
	I0516 22:56:00.782372    8296 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:56:01.094046    8296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444
	W0516 22:56:02.187426    8296 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:56:02.187546    8296 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: (1.0933254s)
	W0516 22:56:02.187717    8296 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	
	W0516 22:56:02.187786    8296 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:56:02.187786    8296 start.go:134] duration metric: createHost completed in 17.5363668s
	I0516 22:56:02.187786    8296 start.go:81] releasing machines lock for "old-k8s-version-20220516225533-2444", held for 17.536737s
	W0516 22:56:02.187998    8296 start.go:608] error starting host: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220516225533-2444 container: docker volume create old-k8s-version-20220516225533-2444 --label name.minikube.sigs.k8s.io=old-k8s-version-20220516225533-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220516225533-2444: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220516225533-2444': mkdir /var/lib/docker/volumes/old-k8s-version-20220516225533-2444: read-only file system
	I0516 22:56:02.210259    8296 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:56:03.287475    8296 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:56:03.287475    8296 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.0771603s)
	I0516 22:56:03.287475    8296 delete.go:82] Unable to get host status for old-k8s-version-20220516225533-2444, assuming it has already been deleted: state: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	W0516 22:56:03.287475    8296 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220516225533-2444 container: docker volume create old-k8s-version-20220516225533-2444 --label name.minikube.sigs.k8s.io=old-k8s-version-20220516225533-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220516225533-2444: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220516225533-2444': mkdir /var/lib/docker/volumes/old-k8s-version-20220516225533-2444: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220516225533-2444 container: docker volume create old-k8s-version-20220516225533-2444 --label name.minikube.sigs.k8s.io=old-k8s-version-20220516225533-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220516225533-2444: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220516225533-2444': mkdir /var/lib/docker/volumes/old-k8s-version-20220516225533-2444: read-only file system
	
	I0516 22:56:03.287475    8296 start.go:623] Will try again in 5 seconds ...
	I0516 22:56:08.301554    8296 start.go:352] acquiring machines lock for old-k8s-version-20220516225533-2444: {Name:mk5023de8a7eabf3a3502247916ec67ae4aced29 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:56:08.301980    8296 start.go:356] acquired machines lock for "old-k8s-version-20220516225533-2444" in 182.6µs
	I0516 22:56:08.302081    8296 start.go:94] Skipping create...Using existing machine configuration
	I0516 22:56:08.302081    8296 fix.go:55] fixHost starting: 
	I0516 22:56:08.320456    8296 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:56:09.482186    8296 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:56:09.482335    8296 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.1615285s)
	I0516 22:56:09.482436    8296 fix.go:103] recreateIfNeeded on old-k8s-version-20220516225533-2444: state= err=unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:56:09.482436    8296 fix.go:108] machineExists: false. err=machine does not exist
	I0516 22:56:09.488304    8296 out.go:177] * docker "old-k8s-version-20220516225533-2444" container is missing, will recreate.
	I0516 22:56:09.490292    8296 delete.go:124] DEMOLISHING old-k8s-version-20220516225533-2444 ...
	I0516 22:56:09.504303    8296 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:56:10.591769    8296 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:56:10.591815    8296 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.0872816s)
	W0516 22:56:10.591911    8296 stop.go:75] unable to get state: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:56:10.591911    8296 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:56:10.608654    8296 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:56:11.750770    8296 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:56:11.750770    8296 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.1421068s)
	I0516 22:56:11.750770    8296 delete.go:82] Unable to get host status for old-k8s-version-20220516225533-2444, assuming it has already been deleted: state: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:56:11.757771    8296 cli_runner.go:164] Run: docker container inspect -f {{.Id}} old-k8s-version-20220516225533-2444
	W0516 22:56:12.888076    8296 cli_runner.go:211] docker container inspect -f {{.Id}} old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:56:12.888076    8296 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} old-k8s-version-20220516225533-2444: (1.1302947s)
	I0516 22:56:12.888076    8296 kic.go:356] could not find the container old-k8s-version-20220516225533-2444 to remove it. will try anyways
	I0516 22:56:12.899566    8296 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:56:14.028097    8296 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:56:14.028319    8296 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.1285216s)
	W0516 22:56:14.028413    8296 oci.go:84] error getting container status, will try to delete anyways: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:56:14.036857    8296 cli_runner.go:164] Run: docker exec --privileged -t old-k8s-version-20220516225533-2444 /bin/bash -c "sudo init 0"
	W0516 22:56:15.159718    8296 cli_runner.go:211] docker exec --privileged -t old-k8s-version-20220516225533-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 22:56:15.159718    8296 cli_runner.go:217] Completed: docker exec --privileged -t old-k8s-version-20220516225533-2444 /bin/bash -c "sudo init 0": (1.1226941s)
	I0516 22:56:15.159718    8296 oci.go:641] error shutdown old-k8s-version-20220516225533-2444: docker exec --privileged -t old-k8s-version-20220516225533-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:56:16.180788    8296 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:56:17.298188    8296 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:56:17.298188    8296 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.1173231s)
	I0516 22:56:17.298188    8296 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:56:17.298188    8296 oci.go:655] temporary error: container old-k8s-version-20220516225533-2444 status is  but expect it to be exited
	I0516 22:56:17.298188    8296 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:56:17.773861    8296 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:56:18.871019    8296 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:56:18.871019    8296 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.0968978s)
	I0516 22:56:18.871019    8296 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:56:18.871019    8296 oci.go:655] temporary error: container old-k8s-version-20220516225533-2444 status is  but expect it to be exited
	I0516 22:56:18.871019    8296 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:56:19.793494    8296 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:56:20.924296    8296 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:56:20.924381    8296 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.1304252s)
	I0516 22:56:20.924405    8296 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:56:20.924405    8296 oci.go:655] temporary error: container old-k8s-version-20220516225533-2444 status is  but expect it to be exited
	I0516 22:56:20.924494    8296 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:56:21.585498    8296 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:56:22.662888    8296 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:56:22.662935    8296 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.0771829s)
	I0516 22:56:22.663001    8296 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:56:22.663045    8296 oci.go:655] temporary error: container old-k8s-version-20220516225533-2444 status is  but expect it to be exited
	I0516 22:56:22.663076    8296 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:56:23.782041    8296 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:56:24.877653    8296 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:56:24.877653    8296 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.095603s)
	I0516 22:56:24.877653    8296 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:56:24.877653    8296 oci.go:655] temporary error: container old-k8s-version-20220516225533-2444 status is  but expect it to be exited
	I0516 22:56:24.877653    8296 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:56:26.407595    8296 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:56:27.562306    8296 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:56:27.562306    8296 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.1544185s)
	I0516 22:56:27.562306    8296 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:56:27.562306    8296 oci.go:655] temporary error: container old-k8s-version-20220516225533-2444 status is  but expect it to be exited
	I0516 22:56:27.562306    8296 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:56:30.626200    8296 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:56:31.685483    8296 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:56:31.685708    8296 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.0583037s)
	I0516 22:56:31.685774    8296 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:56:31.685774    8296 oci.go:655] temporary error: container old-k8s-version-20220516225533-2444 status is  but expect it to be exited
	I0516 22:56:31.685774    8296 oci.go:88] couldn't shut down old-k8s-version-20220516225533-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	 
	I0516 22:56:31.694520    8296 cli_runner.go:164] Run: docker rm -f -v old-k8s-version-20220516225533-2444
	I0516 22:56:32.817771    8296 cli_runner.go:217] Completed: docker rm -f -v old-k8s-version-20220516225533-2444: (1.1232419s)
	I0516 22:56:32.824764    8296 cli_runner.go:164] Run: docker container inspect -f {{.Id}} old-k8s-version-20220516225533-2444
	W0516 22:56:33.940871    8296 cli_runner.go:211] docker container inspect -f {{.Id}} old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:56:33.940871    8296 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} old-k8s-version-20220516225533-2444: (1.1150769s)
	I0516 22:56:33.947772    8296 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220516225533-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:56:35.018175    8296 cli_runner.go:211] docker network inspect old-k8s-version-20220516225533-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:56:35.018175    8296 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220516225533-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0703936s)
	I0516 22:56:35.027000    8296 network_create.go:272] running [docker network inspect old-k8s-version-20220516225533-2444] to gather additional debugging logs...
	I0516 22:56:35.027000    8296 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220516225533-2444
	W0516 22:56:36.106982    8296 cli_runner.go:211] docker network inspect old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:56:36.106982    8296 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220516225533-2444: (1.0799729s)
	I0516 22:56:36.106982    8296 network_create.go:275] error running [docker network inspect old-k8s-version-20220516225533-2444]: docker network inspect old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220516225533-2444
	I0516 22:56:36.106982    8296 network_create.go:277] output of [docker network inspect old-k8s-version-20220516225533-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220516225533-2444
	
	** /stderr **
	W0516 22:56:36.107982    8296 delete.go:139] delete failed (probably ok) <nil>
	I0516 22:56:36.107982    8296 fix.go:115] Sleeping 1 second for extra luck!
	I0516 22:56:37.118308    8296 start.go:131] createHost starting for "" (driver="docker")
	I0516 22:56:37.122345    8296 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0516 22:56:37.122345    8296 start.go:165] libmachine.API.Create for "old-k8s-version-20220516225533-2444" (driver="docker")
	I0516 22:56:37.122345    8296 client.go:168] LocalClient.Create starting
	I0516 22:56:37.123022    8296 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 22:56:37.123689    8296 main.go:134] libmachine: Decoding PEM data...
	I0516 22:56:37.123689    8296 main.go:134] libmachine: Parsing certificate...
	I0516 22:56:37.123689    8296 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 22:56:37.123689    8296 main.go:134] libmachine: Decoding PEM data...
	I0516 22:56:37.123689    8296 main.go:134] libmachine: Parsing certificate...
	I0516 22:56:37.135065    8296 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220516225533-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:56:38.269793    8296 cli_runner.go:211] docker network inspect old-k8s-version-20220516225533-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:56:38.269793    8296 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220516225533-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1341512s)
	I0516 22:56:38.290274    8296 network_create.go:272] running [docker network inspect old-k8s-version-20220516225533-2444] to gather additional debugging logs...
	I0516 22:56:38.290274    8296 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220516225533-2444
	W0516 22:56:39.348830    8296 cli_runner.go:211] docker network inspect old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:56:39.349152    8296 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220516225533-2444: (1.0585467s)
	I0516 22:56:39.349152    8296 network_create.go:275] error running [docker network inspect old-k8s-version-20220516225533-2444]: docker network inspect old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220516225533-2444
	I0516 22:56:39.349152    8296 network_create.go:277] output of [docker network inspect old-k8s-version-20220516225533-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220516225533-2444
	
	** /stderr **
	I0516 22:56:39.357390    8296 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 22:56:40.436720    8296 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0791214s)
	I0516 22:56:40.454492    8296 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000698130] amended:true}} dirty:map[192.168.49.0:0xc000698130 192.168.58.0:0xc0000063c0 192.168.67.0:0xc000698280 192.168.76.0:0xc000698318] misses:2}
	I0516 22:56:40.454557    8296 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:56:40.473509    8296 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000698130] amended:true}} dirty:map[192.168.49.0:0xc000698130 192.168.58.0:0xc0000063c0 192.168.67.0:0xc000698280 192.168.76.0:0xc000698318] misses:3}
	I0516 22:56:40.473509    8296 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:56:40.489491    8296 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000698130 192.168.58.0:0xc0000063c0 192.168.67.0:0xc000698280 192.168.76.0:0xc000698318] amended:false}} dirty:map[] misses:0}
	I0516 22:56:40.490001    8296 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:56:40.508041    8296 network.go:279] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000698130 192.168.58.0:0xc0000063c0 192.168.67.0:0xc000698280 192.168.76.0:0xc000698318] amended:false}} dirty:map[] misses:0}
	I0516 22:56:40.508041    8296 network.go:238] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:56:40.525023    8296 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000698130 192.168.58.0:0xc0000063c0 192.168.67.0:0xc000698280 192.168.76.0:0xc000698318] amended:true}} dirty:map[192.168.49.0:0xc000698130 192.168.58.0:0xc0000063c0 192.168.67.0:0xc000698280 192.168.76.0:0xc000698318 192.168.85.0:0xc0005245f0] misses:0}
	I0516 22:56:40.525023    8296 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:56:40.525023    8296 network_create.go:115] attempt to create docker network old-k8s-version-20220516225533-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0516 22:56:40.536684    8296 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444
	W0516 22:56:41.641135    8296 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:56:41.641411    8296 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444: (1.1044419s)
	E0516 22:56:41.641584    8296 network_create.go:104] error while trying to create docker network old-k8s-version-20220516225533-2444 192.168.85.0/24: create docker network old-k8s-version-20220516225533-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network dc044452646f119c578c0af112093f1c79c8a6bed8e5e5980151c2941829166e (br-dc044452646f): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	W0516 22:56:41.641895    8296 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network old-k8s-version-20220516225533-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network dc044452646f119c578c0af112093f1c79c8a6bed8e5e5980151c2941829166e (br-dc044452646f): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network old-k8s-version-20220516225533-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network dc044452646f119c578c0af112093f1c79c8a6bed8e5e5980151c2941829166e (br-dc044452646f): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	I0516 22:56:41.658030    8296 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 22:56:42.747103    8296 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0889237s)
	I0516 22:56:42.758261    8296 cli_runner.go:164] Run: docker volume create old-k8s-version-20220516225533-2444 --label name.minikube.sigs.k8s.io=old-k8s-version-20220516225533-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 22:56:43.847194    8296 cli_runner.go:211] docker volume create old-k8s-version-20220516225533-2444 --label name.minikube.sigs.k8s.io=old-k8s-version-20220516225533-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 22:56:43.847265    8296 cli_runner.go:217] Completed: docker volume create old-k8s-version-20220516225533-2444 --label name.minikube.sigs.k8s.io=old-k8s-version-20220516225533-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0887181s)
	I0516 22:56:43.847434    8296 client.go:171] LocalClient.Create took 6.7250314s
	I0516 22:56:45.867607    8296 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:56:45.875651    8296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444
	W0516 22:56:46.942700    8296 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:56:46.942962    8296 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: (1.0669672s)
	I0516 22:56:46.943395    8296 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:56:47.282483    8296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444
	W0516 22:56:48.348667    8296 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:56:48.348719    8296 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: (1.0653794s)
	W0516 22:56:48.349061    8296 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	
	W0516 22:56:48.349151    8296 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:56:48.361968    8296 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:56:48.374432    8296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444
	W0516 22:56:49.457434    8296 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:56:49.457434    8296 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: (1.0828125s)
	I0516 22:56:49.457434    8296 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:56:49.699612    8296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444
	W0516 22:56:50.750636    8296 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:56:50.750636    8296 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: (1.0510147s)
	W0516 22:56:50.750636    8296 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	
	W0516 22:56:50.750636    8296 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:56:50.750636    8296 start.go:134] duration metric: createHost completed in 13.632211s
	I0516 22:56:50.761643    8296 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:56:50.767637    8296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444
	W0516 22:56:51.866225    8296 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:56:51.866444    8296 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: (1.09741s)
	I0516 22:56:51.866444    8296 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:56:52.126754    8296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444
	W0516 22:56:53.238473    8296 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:56:53.238473    8296 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: (1.1115152s)
	W0516 22:56:53.238473    8296 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	
	W0516 22:56:53.238473    8296 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:56:53.248320    8296 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:56:53.255063    8296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444
	W0516 22:56:54.347112    8296 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:56:54.347363    8296 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: (1.0920403s)
	I0516 22:56:54.347629    8296 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:56:54.564159    8296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444
	W0516 22:56:55.701129    8296 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:56:55.701129    8296 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: (1.1369606s)
	W0516 22:56:55.701129    8296 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	
	W0516 22:56:55.701129    8296 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:56:55.701129    8296 fix.go:57] fixHost completed within 47.3986416s
	I0516 22:56:55.701129    8296 start.go:81] releasing machines lock for "old-k8s-version-20220516225533-2444", held for 47.3986416s
	W0516 22:56:55.702460    8296 out.go:239] * Failed to start docker container. Running "minikube delete -p old-k8s-version-20220516225533-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220516225533-2444 container: docker volume create old-k8s-version-20220516225533-2444 --label name.minikube.sigs.k8s.io=old-k8s-version-20220516225533-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220516225533-2444: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220516225533-2444': mkdir /var/lib/docker/volumes/old-k8s-version-20220516225533-2444: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p old-k8s-version-20220516225533-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220516225533-2444 container: docker volume create old-k8s-version-20220516225533-2444 --label name.minikube.sigs.k8s.io=old-k8s-version-20220516225533-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220516225533-2444: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220516225533-2444': mkdir /var/lib/docker/volumes/old-k8s-version-20220516225533-2444: read-only file system
	
	I0516 22:56:55.707892    8296 out.go:177] 
	W0516 22:56:55.710595    8296 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220516225533-2444 container: docker volume create old-k8s-version-20220516225533-2444 --label name.minikube.sigs.k8s.io=old-k8s-version-20220516225533-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220516225533-2444: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220516225533-2444': mkdir /var/lib/docker/volumes/old-k8s-version-20220516225533-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220516225533-2444 container: docker volume create old-k8s-version-20220516225533-2444 --label name.minikube.sigs.k8s.io=old-k8s-version-20220516225533-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220516225533-2444: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220516225533-2444': mkdir /var/lib/docker/volumes/old-k8s-version-20220516225533-2444: read-only file system
	
	W0516 22:56:55.710807    8296 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0516 22:56:55.710807    8296 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0516 22:56:55.714814    8296 out.go:177] 

** /stderr **
start_stop_delete_test.go:190: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p old-k8s-version-20220516225533-2444 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0": exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220516225533-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220516225533-2444: exit status 1 (1.2121023s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220516225533-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220516225533-2444 -n old-k8s-version-20220516225533-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220516225533-2444 -n old-k8s-version-20220516225533-2444: exit status 7 (2.8742703s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:56:59.886567    6456 status.go:247] status error: host: state: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220516225533-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (86.28s)
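The `networks have overlapping IPv4` rejection captured above means the subnet minikube reserved (192.168.85.0/24) collided with the IPAM range of a bridge network that already existed on the daemon. The overlap condition Docker enforces can be sketched in Go; `overlaps` is a hypothetical helper for illustration, not minikube's actual network.go code:

```go
package main

import (
	"fmt"
	"net"
)

// overlaps reports whether two IPv4 CIDR blocks share any addresses,
// which is the condition the Docker daemon rejects with
// "networks have overlapping IPv4" when creating a bridge network.
func overlaps(a, b string) bool {
	_, na, err := net.ParseCIDR(a)
	if err != nil {
		return false
	}
	_, nb, err := net.ParseCIDR(b)
	if err != nil {
		return false
	}
	// Two aligned CIDR blocks overlap iff one contains the
	// other's network address.
	return na.Contains(nb.IP) || nb.Contains(na.IP)
}

func main() {
	// The subnet minikube tried to reserve vs. a narrower block inside it.
	fmt.Println(overlaps("192.168.85.0/24", "192.168.85.128/25")) // true
	// Disjoint /24s, like the reservations skipped earlier in the log.
	fmt.Println(overlaps("192.168.58.0/24", "192.168.67.0/24")) // false
}
```

On a live daemon the conflicting network can be located by running `docker network inspect` on each bridge network and comparing its IPAM subnet against the range minikube is about to reserve.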

TestStartStop/group/no-preload/serial/FirstStart (85.03s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-20220516225557-2444 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.23.6

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p no-preload-20220516225557-2444 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.23.6: exit status 60 (1m20.8233918s)

-- stdout --
	* [no-preload-20220516225557-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node no-preload-20220516225557-2444 in cluster no-preload-20220516225557-2444
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "no-preload-20220516225557-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0516 22:55:58.110534    8948 out.go:296] Setting OutFile to fd 1652 ...
	I0516 22:55:58.173661    8948 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:55:58.173661    8948 out.go:309] Setting ErrFile to fd 1988...
	I0516 22:55:58.173661    8948 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:55:58.189350    8948 out.go:303] Setting JSON to false
	I0516 22:55:58.193479    8948 start.go:115] hostinfo: {"hostname":"minikube2","uptime":4870,"bootTime":1652736888,"procs":155,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0516 22:55:58.193518    8948 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0516 22:55:58.201107    8948 out.go:177] * [no-preload-20220516225557-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0516 22:55:58.205102    8948 notify.go:193] Checking for updates...
	I0516 22:55:58.208034    8948 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0516 22:55:58.210930    8948 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0516 22:55:58.213573    8948 out.go:177]   - MINIKUBE_LOCATION=12739
	I0516 22:55:58.216402    8948 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0516 22:55:58.220772    8948 config.go:178] Loaded profile config "cert-expiration-20220516225440-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:55:58.221557    8948 config.go:178] Loaded profile config "cert-options-20220516225447-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:55:58.222174    8948 config.go:178] Loaded profile config "multinode-20220516223121-2444-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:55:58.222979    8948 config.go:178] Loaded profile config "old-k8s-version-20220516225533-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0516 22:55:58.223103    8948 driver.go:358] Setting default libvirt URI to qemu:///system
	I0516 22:56:00.983677    8948 docker.go:137] docker version: linux-20.10.14
	I0516 22:56:00.991677    8948 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:56:03.147241    8948 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1555453s)
	I0516 22:56:03.148702    8948 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 22:56:02.0963824 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:56:03.167360    8948 out.go:177] * Using the docker driver based on user configuration
	I0516 22:56:03.169469    8948 start.go:284] selected driver: docker
	I0516 22:56:03.169469    8948 start.go:806] validating driver "docker" against <nil>
	I0516 22:56:03.169838    8948 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0516 22:56:03.243317    8948 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:56:05.384517    8948 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1411825s)
	I0516 22:56:05.384905    8948 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 22:56:04.3042243 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:56:05.384905    8948 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0516 22:56:05.386123    8948 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0516 22:56:05.389782    8948 out.go:177] * Using Docker Desktop driver with the root privilege
	I0516 22:56:05.391971    8948 cni.go:95] Creating CNI manager for ""
	I0516 22:56:05.391971    8948 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0516 22:56:05.391971    8948 start_flags.go:306] config:
	{Name:no-preload-20220516225557-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220516225557-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 22:56:05.394174    8948 out.go:177] * Starting control plane node no-preload-20220516225557-2444 in cluster no-preload-20220516225557-2444
	I0516 22:56:05.397766    8948 cache.go:120] Beginning downloading kic base image for docker with docker
	I0516 22:56:05.400698    8948 out.go:177] * Pulling base image ...
	I0516 22:56:05.402274    8948 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0516 22:56:05.403252    8948 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0516 22:56:05.403426    8948 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-20220516225557-2444\config.json ...
	I0516 22:56:05.403426    8948 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.23.6
	I0516 22:56:05.403426    8948 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.23.6
	I0516 22:56:05.403426    8948 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I0516 22:56:05.403580    8948 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd:3.5.1-0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.5.1-0
	I0516 22:56:05.403659    8948 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.23.6
	I0516 22:56:05.403659    8948 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-20220516225557-2444\config.json: {Name:mk936f18020f21cbe1c18a920d0aedb46bf5d68a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0516 22:56:05.403659    8948 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns\coredns:v1.8.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns\coredns_v1.8.6
	I0516 22:56:05.403659    8948 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.23.6
	I0516 22:56:05.403835    8948 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause:3.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.6
	I0516 22:56:05.588948    8948 cache.go:107] acquiring lock: {Name:mk1cf2f2eee53b81f1c95945c2dd3783d0c7d992 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:56:05.589028    8948 cache.go:107] acquiring lock: {Name:mkb7d2f7b32c5276784ba454e50c746d7fc6c05f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:56:05.589028    8948 cache.go:107] acquiring lock: {Name:mk90a34f529b9ea089d74e18a271c58e34606f29 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:56:05.589230    8948 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.23.6 exists
	I0516 22:56:05.589230    8948 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.6 exists
	I0516 22:56:05.589230    8948 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.23.6 exists
	I0516 22:56:05.589230    8948 cache.go:96] cache image "k8s.gcr.io/pause:3.6" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\pause_3.6" took 185.2622ms
	I0516 22:56:05.589230    8948 cache.go:80] save to tar file k8s.gcr.io/pause:3.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.6 succeeded
	I0516 22:56:05.589230    8948 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.23.6" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-apiserver_v1.23.6" took 185.5691ms
	I0516 22:56:05.589230    8948 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.23.6 succeeded
	I0516 22:56:05.589230    8948 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.23.6" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-scheduler_v1.23.6" took 185.3931ms
	I0516 22:56:05.589766    8948 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.23.6 succeeded
	I0516 22:56:05.602591    8948 cache.go:107] acquiring lock: {Name:mk93ccdec90972c05247bea23df9b97c54ef0291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:56:05.602591    8948 cache.go:107] acquiring lock: {Name:mka0a7f9fce0e132e7529c42bed359c919fc231b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:56:05.602591    8948 cache.go:107] acquiring lock: {Name:mk9255ee8c390126b963cceac501a1fcc40ecb6b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:56:05.602796    8948 cache.go:107] acquiring lock: {Name:mk3772b9dcb36c3cbc3aa4dfbe66c5266092e2c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:56:05.602889    8948 cache.go:107] acquiring lock: {Name:mk40b809628c4e9673e2a41bf9fb31b8a6b3529d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:56:05.602889    8948 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I0516 22:56:05.602889    8948 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns\coredns_v1.8.6 exists
	I0516 22:56:05.603000    8948 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 199.4185ms
	I0516 22:56:05.603000    8948 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.23.6 exists
	I0516 22:56:05.603123    8948 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I0516 22:56:05.603000    8948 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.6" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\coredns\\coredns_v1.8.6" took 199.3397ms
	I0516 22:56:05.603123    8948 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.5.1-0 exists
	I0516 22:56:05.603123    8948 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns\coredns_v1.8.6 succeeded
	I0516 22:56:05.603209    8948 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.23.6 exists
	I0516 22:56:05.603209    8948 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.1-0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\etcd_3.5.1-0" took 199.5482ms
	I0516 22:56:05.603209    8948 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.1-0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.5.1-0 succeeded
	I0516 22:56:05.603209    8948 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.23.6" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-proxy_v1.23.6" took 199.7809ms
	I0516 22:56:05.603389    8948 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.23.6 succeeded
	I0516 22:56:05.603466    8948 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.23.6" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-controller-manager_v1.23.6" took 199.9613ms
	I0516 22:56:05.603466    8948 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.23.6 succeeded
	I0516 22:56:05.603466    8948 cache.go:87] Successfully saved all images to host disk.
	I0516 22:56:06.549270    8948 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0516 22:56:06.549270    8948 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:56:06.549270    8948 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:56:06.549270    8948 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
	I0516 22:56:06.549270    8948 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory, skipping pull
	I0516 22:56:06.549270    8948 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in cache, skipping pull
	I0516 22:56:06.549270    8948 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
	I0516 22:56:06.549270    8948 cache.go:160] Loading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from local cache
	I0516 22:56:06.549270    8948 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:56:08.875816    8948 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from cached tarball
	I0516 22:56:08.875872    8948 cache.go:206] Successfully downloaded all kic artifacts
	I0516 22:56:08.875967    8948 start.go:352] acquiring machines lock for no-preload-20220516225557-2444: {Name:mkb26cae446bfb2d0e92a0ecbe26357c6ab2d349 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:56:08.875967    8948 start.go:356] acquired machines lock for "no-preload-20220516225557-2444" in 0s
	I0516 22:56:08.875967    8948 start.go:91] Provisioning new machine with config: &{Name:no-preload-20220516225557-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220516225557-2444 Namespace:default APIServerName:mi
nikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0516 22:56:08.876525    8948 start.go:131] createHost starting for "" (driver="docker")
	I0516 22:56:08.904047    8948 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0516 22:56:08.905057    8948 start.go:165] libmachine.API.Create for "no-preload-20220516225557-2444" (driver="docker")
	I0516 22:56:08.905173    8948 client.go:168] LocalClient.Create starting
	I0516 22:56:08.905516    8948 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 22:56:08.905516    8948 main.go:134] libmachine: Decoding PEM data...
	I0516 22:56:08.905516    8948 main.go:134] libmachine: Parsing certificate...
	I0516 22:56:08.905516    8948 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 22:56:08.905516    8948 main.go:134] libmachine: Decoding PEM data...
	I0516 22:56:08.905516    8948 main.go:134] libmachine: Parsing certificate...
	I0516 22:56:08.916028    8948 cli_runner.go:164] Run: docker network inspect no-preload-20220516225557-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:56:10.021305    8948 cli_runner.go:211] docker network inspect no-preload-20220516225557-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:56:10.021305    8948 cli_runner.go:217] Completed: docker network inspect no-preload-20220516225557-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1052675s)
	I0516 22:56:10.032678    8948 network_create.go:272] running [docker network inspect no-preload-20220516225557-2444] to gather additional debugging logs...
	I0516 22:56:10.032762    8948 cli_runner.go:164] Run: docker network inspect no-preload-20220516225557-2444
	W0516 22:56:11.108410    8948 cli_runner.go:211] docker network inspect no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:56:11.108410    8948 cli_runner.go:217] Completed: docker network inspect no-preload-20220516225557-2444: (1.0756386s)
	I0516 22:56:11.108410    8948 network_create.go:275] error running [docker network inspect no-preload-20220516225557-2444]: docker network inspect no-preload-20220516225557-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20220516225557-2444
	I0516 22:56:11.108410    8948 network_create.go:277] output of [docker network inspect no-preload-20220516225557-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20220516225557-2444
	
	** /stderr **
	I0516 22:56:11.118412    8948 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 22:56:12.209175    8948 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0907538s)
	I0516 22:56:12.231119    8948 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0012160c0] misses:0}
	I0516 22:56:12.232288    8948 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:56:12.232323    8948 network_create.go:115] attempt to create docker network no-preload-20220516225557-2444 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0516 22:56:12.241553    8948 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444
	W0516 22:56:13.337922    8948 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:56:13.338167    8948 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444: (1.0962836s)
	W0516 22:56:13.338222    8948 network_create.go:107] failed to create docker network no-preload-20220516225557-2444 192.168.49.0/24, will retry: subnet is taken
	I0516 22:56:13.360317    8948 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0012160c0] amended:false}} dirty:map[] misses:0}
	I0516 22:56:13.360317    8948 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:56:13.382661    8948 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0012160c0] amended:true}} dirty:map[192.168.49.0:0xc0012160c0 192.168.58.0:0xc0010b02d8] misses:0}
	I0516 22:56:13.382661    8948 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:56:13.382661    8948 network_create.go:115] attempt to create docker network no-preload-20220516225557-2444 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0516 22:56:13.392397    8948 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444
	W0516 22:56:14.451050    8948 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:56:14.451113    8948 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444: (1.0585792s)
	W0516 22:56:14.451188    8948 network_create.go:107] failed to create docker network no-preload-20220516225557-2444 192.168.58.0/24, will retry: subnet is taken
	I0516 22:56:14.471150    8948 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0012160c0] amended:true}} dirty:map[192.168.49.0:0xc0012160c0 192.168.58.0:0xc0010b02d8] misses:1}
	I0516 22:56:14.471150    8948 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:56:14.488215    8948 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0012160c0] amended:true}} dirty:map[192.168.49.0:0xc0012160c0 192.168.58.0:0xc0010b02d8 192.168.67.0:0xc0010b0370] misses:1}
	I0516 22:56:14.488215    8948 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:56:14.488215    8948 network_create.go:115] attempt to create docker network no-preload-20220516225557-2444 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0516 22:56:14.495272    8948 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444
	W0516 22:56:15.554754    8948 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:56:15.554827    8948 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444: (1.0592989s)
	W0516 22:56:15.554876    8948 network_create.go:107] failed to create docker network no-preload-20220516225557-2444 192.168.67.0/24, will retry: subnet is taken
	I0516 22:56:15.573526    8948 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0012160c0] amended:true}} dirty:map[192.168.49.0:0xc0012160c0 192.168.58.0:0xc0010b02d8 192.168.67.0:0xc0010b0370] misses:2}
	I0516 22:56:15.574518    8948 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:56:15.591887    8948 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0012160c0] amended:true}} dirty:map[192.168.49.0:0xc0012160c0 192.168.58.0:0xc0010b02d8 192.168.67.0:0xc0010b0370 192.168.76.0:0xc001216160] misses:2}
	I0516 22:56:15.591887    8948 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:56:15.592867    8948 network_create.go:115] attempt to create docker network no-preload-20220516225557-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0516 22:56:15.599968    8948 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444
	W0516 22:56:16.699757    8948 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:56:16.699803    8948 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444: (1.0995798s)
	E0516 22:56:16.699874    8948 network_create.go:104] error while trying to create docker network no-preload-20220516225557-2444 192.168.76.0/24: create docker network no-preload-20220516225557-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d7bf6ae800ce0ed8ac92ee9a51ec53475ad1ad382b193f1a8681c338abed1026 (br-d7bf6ae800ce): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	W0516 22:56:16.700142    8948 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network no-preload-20220516225557-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d7bf6ae800ce0ed8ac92ee9a51ec53475ad1ad382b193f1a8681c338abed1026 (br-d7bf6ae800ce): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network no-preload-20220516225557-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d7bf6ae800ce0ed8ac92ee9a51ec53475ad1ad382b193f1a8681c338abed1026 (br-d7bf6ae800ce): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	I0516 22:56:16.715970    8948 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 22:56:17.810111    8948 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0940937s)
	I0516 22:56:17.820395    8948 cli_runner.go:164] Run: docker volume create no-preload-20220516225557-2444 --label name.minikube.sigs.k8s.io=no-preload-20220516225557-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 22:56:18.917159    8948 cli_runner.go:211] docker volume create no-preload-20220516225557-2444 --label name.minikube.sigs.k8s.io=no-preload-20220516225557-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 22:56:18.917204    8948 cli_runner.go:217] Completed: docker volume create no-preload-20220516225557-2444 --label name.minikube.sigs.k8s.io=no-preload-20220516225557-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0965693s)
	I0516 22:56:18.917301    8948 client.go:171] LocalClient.Create took 10.0120426s
	I0516 22:56:20.936381    8948 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:56:20.943407    8948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444
	W0516 22:56:22.013001    8948 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:56:22.013127    8948 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: (1.0638969s)
	I0516 22:56:22.013366    8948 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:56:22.305940    8948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444
	W0516 22:56:23.458323    8948 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:56:23.458388    8948 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: (1.1523336s)
	W0516 22:56:23.458446    8948 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	
	W0516 22:56:23.458446    8948 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:56:23.471160    8948 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:56:23.478366    8948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444
	W0516 22:56:24.564634    8948 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:56:24.564634    8948 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: (1.0862578s)
	I0516 22:56:24.564634    8948 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:56:24.873200    8948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444
	W0516 22:56:25.935370    8948 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:56:25.935425    8948 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: (1.0619532s)
	W0516 22:56:25.935689    8948 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	
	W0516 22:56:25.935744    8948 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:56:25.935744    8948 start.go:134] duration metric: createHost completed in 17.059006s
	I0516 22:56:25.935744    8948 start.go:81] releasing machines lock for "no-preload-20220516225557-2444", held for 17.0596305s
	W0516 22:56:25.935744    8948 start.go:608] error starting host: creating host: create: creating: setting up container node: creating volume for no-preload-20220516225557-2444 container: docker volume create no-preload-20220516225557-2444 --label name.minikube.sigs.k8s.io=no-preload-20220516225557-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220516225557-2444: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220516225557-2444': mkdir /var/lib/docker/volumes/no-preload-20220516225557-2444: read-only file system
	I0516 22:56:25.952958    8948 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:56:27.053967    8948 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:56:27.054074    8948 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.1007397s)
	I0516 22:56:27.054121    8948 delete.go:82] Unable to get host status for no-preload-20220516225557-2444, assuming it has already been deleted: state: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	W0516 22:56:27.054482    8948 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for no-preload-20220516225557-2444 container: docker volume create no-preload-20220516225557-2444 --label name.minikube.sigs.k8s.io=no-preload-20220516225557-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220516225557-2444: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220516225557-2444': mkdir /var/lib/docker/volumes/no-preload-20220516225557-2444: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for no-preload-20220516225557-2444 container: docker volume create no-preload-20220516225557-2444 --label name.minikube.sigs.k8s.io=no-preload-20220516225557-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220516225557-2444: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220516225557-2444': mkdir /var/lib/docker/volumes/no-preload-20220516225557-2444: read-only file system
	
	I0516 22:56:27.054538    8948 start.go:623] Will try again in 5 seconds ...
	I0516 22:56:32.056564    8948 start.go:352] acquiring machines lock for no-preload-20220516225557-2444: {Name:mkb26cae446bfb2d0e92a0ecbe26357c6ab2d349 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:56:32.056564    8948 start.go:356] acquired machines lock for "no-preload-20220516225557-2444" in 0s
	I0516 22:56:32.056564    8948 start.go:94] Skipping create...Using existing machine configuration
	I0516 22:56:32.057118    8948 fix.go:55] fixHost starting: 
	I0516 22:56:32.071988    8948 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:56:33.149060    8948 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:56:33.149060    8948 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.0770629s)
	I0516 22:56:33.149060    8948 fix.go:103] recreateIfNeeded on no-preload-20220516225557-2444: state= err=unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:56:33.149060    8948 fix.go:108] machineExists: false. err=machine does not exist
	I0516 22:56:33.154070    8948 out.go:177] * docker "no-preload-20220516225557-2444" container is missing, will recreate.
	I0516 22:56:33.156071    8948 delete.go:124] DEMOLISHING no-preload-20220516225557-2444 ...
	I0516 22:56:33.171062    8948 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:56:34.255449    8948 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:56:34.255449    8948 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.0843782s)
	W0516 22:56:34.255449    8948 stop.go:75] unable to get state: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:56:34.255449    8948 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:56:34.270432    8948 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:56:35.377119    8948 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:56:35.377119    8948 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.10647s)
	I0516 22:56:35.377119    8948 delete.go:82] Unable to get host status for no-preload-20220516225557-2444, assuming it has already been deleted: state: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:56:35.385698    8948 cli_runner.go:164] Run: docker container inspect -f {{.Id}} no-preload-20220516225557-2444
	W0516 22:56:36.459311    8948 cli_runner.go:211] docker container inspect -f {{.Id}} no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:56:36.459311    8948 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} no-preload-20220516225557-2444: (1.0736032s)
	I0516 22:56:36.459311    8948 kic.go:356] could not find the container no-preload-20220516225557-2444 to remove it. will try anyways
	I0516 22:56:36.466311    8948 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:56:37.526126    8948 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:56:37.526302    8948 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.0598058s)
	W0516 22:56:37.526302    8948 oci.go:84] error getting container status, will try to delete anyways: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:56:37.539184    8948 cli_runner.go:164] Run: docker exec --privileged -t no-preload-20220516225557-2444 /bin/bash -c "sudo init 0"
	W0516 22:56:38.598876    8948 cli_runner.go:211] docker exec --privileged -t no-preload-20220516225557-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 22:56:38.598876    8948 cli_runner.go:217] Completed: docker exec --privileged -t no-preload-20220516225557-2444 /bin/bash -c "sudo init 0": (1.059683s)
	I0516 22:56:38.598876    8948 oci.go:641] error shutdown no-preload-20220516225557-2444: docker exec --privileged -t no-preload-20220516225557-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:56:39.624655    8948 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:56:40.741734    8948 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:56:40.741777    8948 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.1168704s)
	I0516 22:56:40.741777    8948 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:56:40.741777    8948 oci.go:655] temporary error: container no-preload-20220516225557-2444 status is  but expect it to be exited
	I0516 22:56:40.741777    8948 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:56:41.225791    8948 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:56:42.322708    8948 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:56:42.322708    8948 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.096908s)
	I0516 22:56:42.322708    8948 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:56:42.322708    8948 oci.go:655] temporary error: container no-preload-20220516225557-2444 status is  but expect it to be exited
	I0516 22:56:42.322708    8948 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:56:43.232503    8948 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:56:44.298788    8948 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:56:44.298788    8948 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.066276s)
	I0516 22:56:44.298788    8948 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:56:44.298788    8948 oci.go:655] temporary error: container no-preload-20220516225557-2444 status is  but expect it to be exited
	I0516 22:56:44.298788    8948 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:56:44.943668    8948 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:56:46.012965    8948 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:56:46.012965    8948 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.0691438s)
	I0516 22:56:46.012965    8948 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:56:46.012965    8948 oci.go:655] temporary error: container no-preload-20220516225557-2444 status is  but expect it to be exited
	I0516 22:56:46.012965    8948 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:56:47.143291    8948 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:56:48.209625    8948 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:56:48.209799    8948 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.0661253s)
	I0516 22:56:48.209856    8948 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:56:48.209856    8948 oci.go:655] temporary error: container no-preload-20220516225557-2444 status is  but expect it to be exited
	I0516 22:56:48.209856    8948 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:56:49.730456    8948 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:56:50.782634    8948 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:56:50.782634    8948 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.052169s)
	I0516 22:56:50.782634    8948 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:56:50.782634    8948 oci.go:655] temporary error: container no-preload-20220516225557-2444 status is  but expect it to be exited
	I0516 22:56:50.782634    8948 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:56:53.839984    8948 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:56:54.947277    8948 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:56:54.947354    8948 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.1071492s)
	I0516 22:56:54.947354    8948 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:56:54.947354    8948 oci.go:655] temporary error: container no-preload-20220516225557-2444 status is  but expect it to be exited
	I0516 22:56:54.947354    8948 oci.go:88] couldn't shut down no-preload-20220516225557-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	 
	I0516 22:56:54.955317    8948 cli_runner.go:164] Run: docker rm -f -v no-preload-20220516225557-2444
	I0516 22:56:56.044188    8948 cli_runner.go:217] Completed: docker rm -f -v no-preload-20220516225557-2444: (1.0888279s)
	I0516 22:56:56.056611    8948 cli_runner.go:164] Run: docker container inspect -f {{.Id}} no-preload-20220516225557-2444
	W0516 22:56:57.149803    8948 cli_runner.go:211] docker container inspect -f {{.Id}} no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:56:57.149803    8948 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} no-preload-20220516225557-2444: (1.0931829s)
	I0516 22:56:57.158288    8948 cli_runner.go:164] Run: docker network inspect no-preload-20220516225557-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:56:58.229849    8948 cli_runner.go:211] docker network inspect no-preload-20220516225557-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:56:58.229901    8948 cli_runner.go:217] Completed: docker network inspect no-preload-20220516225557-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0714691s)
	I0516 22:56:58.237313    8948 network_create.go:272] running [docker network inspect no-preload-20220516225557-2444] to gather additional debugging logs...
	I0516 22:56:58.237313    8948 cli_runner.go:164] Run: docker network inspect no-preload-20220516225557-2444
	W0516 22:56:59.302255    8948 cli_runner.go:211] docker network inspect no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:56:59.302411    8948 cli_runner.go:217] Completed: docker network inspect no-preload-20220516225557-2444: (1.0648749s)
	I0516 22:56:59.302443    8948 network_create.go:275] error running [docker network inspect no-preload-20220516225557-2444]: docker network inspect no-preload-20220516225557-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20220516225557-2444
	I0516 22:56:59.302491    8948 network_create.go:277] output of [docker network inspect no-preload-20220516225557-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20220516225557-2444
	
	** /stderr **
	W0516 22:56:59.303413    8948 delete.go:139] delete failed (probably ok) <nil>
	I0516 22:56:59.303413    8948 fix.go:115] Sleeping 1 second for extra luck!
	I0516 22:57:00.308822    8948 start.go:131] createHost starting for "" (driver="docker")
	I0516 22:57:00.312748    8948 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0516 22:57:00.312748    8948 start.go:165] libmachine.API.Create for "no-preload-20220516225557-2444" (driver="docker")
	I0516 22:57:00.312748    8948 client.go:168] LocalClient.Create starting
	I0516 22:57:00.313542    8948 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 22:57:00.313542    8948 main.go:134] libmachine: Decoding PEM data...
	I0516 22:57:00.314089    8948 main.go:134] libmachine: Parsing certificate...
	I0516 22:57:00.314288    8948 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 22:57:00.314288    8948 main.go:134] libmachine: Decoding PEM data...
	I0516 22:57:00.314288    8948 main.go:134] libmachine: Parsing certificate...
	I0516 22:57:00.325412    8948 cli_runner.go:164] Run: docker network inspect no-preload-20220516225557-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:57:01.415319    8948 cli_runner.go:211] docker network inspect no-preload-20220516225557-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:57:01.415319    8948 cli_runner.go:217] Completed: docker network inspect no-preload-20220516225557-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0898498s)
	I0516 22:57:01.424302    8948 network_create.go:272] running [docker network inspect no-preload-20220516225557-2444] to gather additional debugging logs...
	I0516 22:57:01.424302    8948 cli_runner.go:164] Run: docker network inspect no-preload-20220516225557-2444
	W0516 22:57:02.481229    8948 cli_runner.go:211] docker network inspect no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:57:02.481229    8948 cli_runner.go:217] Completed: docker network inspect no-preload-20220516225557-2444: (1.0569183s)
	I0516 22:57:02.481229    8948 network_create.go:275] error running [docker network inspect no-preload-20220516225557-2444]: docker network inspect no-preload-20220516225557-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20220516225557-2444
	I0516 22:57:02.481229    8948 network_create.go:277] output of [docker network inspect no-preload-20220516225557-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20220516225557-2444
	
	** /stderr **
	I0516 22:57:02.489922    8948 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 22:57:03.572000    8948 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0811882s)
	I0516 22:57:03.589169    8948 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0012160c0] amended:true}} dirty:map[192.168.49.0:0xc0012160c0 192.168.58.0:0xc0010b02d8 192.168.67.0:0xc0010b0370 192.168.76.0:0xc001216160] misses:2}
	I0516 22:57:03.589169    8948 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:57:03.607979    8948 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0012160c0] amended:true}} dirty:map[192.168.49.0:0xc0012160c0 192.168.58.0:0xc0010b02d8 192.168.67.0:0xc0010b0370 192.168.76.0:0xc001216160] misses:3}
	I0516 22:57:03.608054    8948 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:57:03.631314    8948 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0012160c0 192.168.58.0:0xc0010b02d8 192.168.67.0:0xc0010b0370 192.168.76.0:0xc001216160] amended:false}} dirty:map[] misses:0}
	I0516 22:57:03.631314    8948 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:57:03.647406    8948 network.go:279] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0012160c0 192.168.58.0:0xc0010b02d8 192.168.67.0:0xc0010b0370 192.168.76.0:0xc001216160] amended:false}} dirty:map[] misses:0}
	I0516 22:57:03.647406    8948 network.go:238] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:57:03.662657    8948 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0012160c0 192.168.58.0:0xc0010b02d8 192.168.67.0:0xc0010b0370 192.168.76.0:0xc001216160] amended:true}} dirty:map[192.168.49.0:0xc0012160c0 192.168.58.0:0xc0010b02d8 192.168.67.0:0xc0010b0370 192.168.76.0:0xc001216160 192.168.85.0:0xc0010b01e8] misses:0}
	I0516 22:57:03.662657    8948 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:57:03.662657    8948 network_create.go:115] attempt to create docker network no-preload-20220516225557-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0516 22:57:03.672979    8948 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444
	W0516 22:57:04.782443    8948 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:57:04.782499    8948 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444: (1.1093022s)
	E0516 22:57:04.782499    8948 network_create.go:104] error while trying to create docker network no-preload-20220516225557-2444 192.168.85.0/24: create docker network no-preload-20220516225557-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e21e91075179639f5692ce986956ea0ec0f5cab2a8fdc18297c9d60be1197928 (br-e21e91075179): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	W0516 22:57:04.782499    8948 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network no-preload-20220516225557-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e21e91075179639f5692ce986956ea0ec0f5cab2a8fdc18297c9d60be1197928 (br-e21e91075179): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network no-preload-20220516225557-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e21e91075179639f5692ce986956ea0ec0f5cab2a8fdc18297c9d60be1197928 (br-e21e91075179): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	I0516 22:57:04.799518    8948 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 22:57:05.924523    8948 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1248063s)
	I0516 22:57:05.933265    8948 cli_runner.go:164] Run: docker volume create no-preload-20220516225557-2444 --label name.minikube.sigs.k8s.io=no-preload-20220516225557-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 22:57:07.030476    8948 cli_runner.go:211] docker volume create no-preload-20220516225557-2444 --label name.minikube.sigs.k8s.io=no-preload-20220516225557-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 22:57:07.030523    8948 cli_runner.go:217] Completed: docker volume create no-preload-20220516225557-2444 --label name.minikube.sigs.k8s.io=no-preload-20220516225557-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0970575s)
	I0516 22:57:07.030570    8948 client.go:171] LocalClient.Create took 6.7177651s
	I0516 22:57:09.054025    8948 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:57:09.060488    8948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444
	W0516 22:57:10.135995    8948 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:57:10.136150    8948 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: (1.0754981s)
	I0516 22:57:10.136322    8948 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:57:10.476404    8948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444
	W0516 22:57:11.529712    8948 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:57:11.529712    8948 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: (1.0532994s)
	W0516 22:57:11.529712    8948 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	
	W0516 22:57:11.529712    8948 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:57:11.539672    8948 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:57:11.547660    8948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444
	W0516 22:57:12.602486    8948 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:57:12.602486    8948 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: (1.0548178s)
	I0516 22:57:12.602486    8948 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:57:12.832046    8948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444
	W0516 22:57:13.885338    8948 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:57:13.885391    8948 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: (1.0526928s)
	W0516 22:57:13.885588    8948 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	
	W0516 22:57:13.885635    8948 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:57:13.885680    8948 start.go:134] duration metric: createHost completed in 13.5766894s
	I0516 22:57:13.891332    8948 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:57:13.900406    8948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444
	W0516 22:57:14.958892    8948 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:57:14.958892    8948 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: (1.0584767s)
	I0516 22:57:14.959276    8948 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:57:15.220573    8948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444
	W0516 22:57:16.301850    8948 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:57:16.301850    8948 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: (1.0812679s)
	W0516 22:57:16.301850    8948 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	
	W0516 22:57:16.301850    8948 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:57:16.311256    8948 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:57:16.318250    8948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444
	W0516 22:57:17.367915    8948 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:57:17.367915    8948 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: (1.0496565s)
	I0516 22:57:17.367915    8948 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:57:17.588289    8948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444
	W0516 22:57:18.648709    8948 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:57:18.648709    8948 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: (1.0604111s)
	W0516 22:57:18.648709    8948 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	
	W0516 22:57:18.648709    8948 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:57:18.648709    8948 fix.go:57] fixHost completed within 46.5911934s
	I0516 22:57:18.648709    8948 start.go:81] releasing machines lock for "no-preload-20220516225557-2444", held for 46.591748s
	W0516 22:57:18.649620    8948 out.go:239] * Failed to start docker container. Running "minikube delete -p no-preload-20220516225557-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220516225557-2444 container: docker volume create no-preload-20220516225557-2444 --label name.minikube.sigs.k8s.io=no-preload-20220516225557-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220516225557-2444: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220516225557-2444': mkdir /var/lib/docker/volumes/no-preload-20220516225557-2444: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p no-preload-20220516225557-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220516225557-2444 container: docker volume create no-preload-20220516225557-2444 --label name.minikube.sigs.k8s.io=no-preload-20220516225557-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220516225557-2444: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220516225557-2444': mkdir /var/lib/docker/volumes/no-preload-20220516225557-2444: read-only file system
	
	I0516 22:57:18.654114    8948 out.go:177] 
	W0516 22:57:18.656353    8948 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220516225557-2444 container: docker volume create no-preload-20220516225557-2444 --label name.minikube.sigs.k8s.io=no-preload-20220516225557-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220516225557-2444: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220516225557-2444': mkdir /var/lib/docker/volumes/no-preload-20220516225557-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220516225557-2444 container: docker volume create no-preload-20220516225557-2444 --label name.minikube.sigs.k8s.io=no-preload-20220516225557-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220516225557-2444: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220516225557-2444': mkdir /var/lib/docker/volumes/no-preload-20220516225557-2444: read-only file system
	
	W0516 22:57:18.656353    8948 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0516 22:57:18.656353    8948 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0516 22:57:18.660076    8948 out.go:177] 

** /stderr **
start_stop_delete_test.go:190: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p no-preload-20220516225557-2444 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.23.6": exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220516225557-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220516225557-2444: exit status 1 (1.1884692s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220516225557-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220516225557-2444 -n no-preload-20220516225557-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220516225557-2444 -n no-preload-20220516225557-2444: exit status 7 (2.8693383s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:57:22.867269    6536 status.go:247] status error: host: state: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220516225557-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (85.03s)

TestStartStop/group/embed-certs/serial/FirstStart (84.67s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-20220516225628-2444 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.6

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p embed-certs-20220516225628-2444 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.6: exit status 60 (1m20.5598082s)

-- stdout --
	* [embed-certs-20220516225628-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node embed-certs-20220516225628-2444 in cluster embed-certs-20220516225628-2444
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "embed-certs-20220516225628-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...

-- /stdout --
** stderr ** 
	I0516 22:56:29.098443    8484 out.go:296] Setting OutFile to fd 1640 ...
	I0516 22:56:29.169517    8484 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:56:29.169517    8484 out.go:309] Setting ErrFile to fd 1412...
	I0516 22:56:29.169517    8484 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:56:29.182034    8484 out.go:303] Setting JSON to false
	I0516 22:56:29.184032    8484 start.go:115] hostinfo: {"hostname":"minikube2","uptime":4901,"bootTime":1652736888,"procs":150,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0516 22:56:29.184032    8484 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0516 22:56:29.191038    8484 out.go:177] * [embed-certs-20220516225628-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0516 22:56:29.195076    8484 notify.go:193] Checking for updates...
	I0516 22:56:29.197037    8484 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0516 22:56:29.200043    8484 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0516 22:56:29.203041    8484 out.go:177]   - MINIKUBE_LOCATION=12739
	I0516 22:56:29.205084    8484 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0516 22:56:29.208040    8484 config.go:178] Loaded profile config "cert-expiration-20220516225440-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:56:29.209041    8484 config.go:178] Loaded profile config "multinode-20220516223121-2444-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:56:29.209041    8484 config.go:178] Loaded profile config "no-preload-20220516225557-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:56:29.209041    8484 config.go:178] Loaded profile config "old-k8s-version-20220516225533-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0516 22:56:29.209041    8484 driver.go:358] Setting default libvirt URI to qemu:///system
	I0516 22:56:31.779659    8484 docker.go:137] docker version: linux-20.10.14
	I0516 22:56:31.786701    8484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:56:33.893265    8484 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1065461s)
	I0516 22:56:33.893265    8484 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 22:56:32.8284198 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:56:33.897266    8484 out.go:177] * Using the docker driver based on user configuration
	I0516 22:56:33.900272    8484 start.go:284] selected driver: docker
	I0516 22:56:33.900272    8484 start.go:806] validating driver "docker" against <nil>
	I0516 22:56:33.900272    8484 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0516 22:56:34.176918    8484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:56:36.364310    8484 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1873736s)
	I0516 22:56:36.364310    8484 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 22:56:35.2824668 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:56:36.364310    8484 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0516 22:56:36.365322    8484 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0516 22:56:36.373316    8484 out.go:177] * Using Docker Desktop driver with the root privilege
	I0516 22:56:36.375316    8484 cni.go:95] Creating CNI manager for ""
	I0516 22:56:36.375316    8484 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0516 22:56:36.375316    8484 start_flags.go:306] config:
	{Name:embed-certs-20220516225628-2444 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220516225628-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 22:56:36.378314    8484 out.go:177] * Starting control plane node embed-certs-20220516225628-2444 in cluster embed-certs-20220516225628-2444
	I0516 22:56:36.381311    8484 cache.go:120] Beginning downloading kic base image for docker with docker
	I0516 22:56:36.384317    8484 out.go:177] * Pulling base image ...
	I0516 22:56:36.387307    8484 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0516 22:56:36.387307    8484 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0516 22:56:36.387307    8484 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0516 22:56:36.387307    8484 cache.go:57] Caching tarball of preloaded images
	I0516 22:56:36.387307    8484 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0516 22:56:36.387307    8484 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0516 22:56:36.387307    8484 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\embed-certs-20220516225628-2444\config.json ...
	I0516 22:56:36.388310    8484 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\embed-certs-20220516225628-2444\config.json: {Name:mkd3715f26870467e1e6ee4a62acecae6d7d844e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0516 22:56:37.449306    8484 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0516 22:56:37.449306    8484 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:56:37.449306    8484 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:56:37.449306    8484 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
	I0516 22:56:37.449306    8484 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory, skipping pull
	I0516 22:56:37.449306    8484 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in cache, skipping pull
	I0516 22:56:37.449904    8484 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
	I0516 22:56:37.449904    8484 cache.go:160] Loading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from local cache
	I0516 22:56:37.449904    8484 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:56:39.715685    8484 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from cached tarball
	I0516 22:56:39.715828    8484 cache.go:206] Successfully downloaded all kic artifacts
	I0516 22:56:39.715991    8484 start.go:352] acquiring machines lock for embed-certs-20220516225628-2444: {Name:mk313f3adfa614f48756e4c4bd1949083e33b93c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:56:39.716015    8484 start.go:356] acquired machines lock for "embed-certs-20220516225628-2444" in 0s
	I0516 22:56:39.716015    8484 start.go:91] Provisioning new machine with config: &{Name:embed-certs-20220516225628-2444 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220516225628-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0516 22:56:39.716015    8484 start.go:131] createHost starting for "" (driver="docker")
	I0516 22:56:39.720922    8484 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0516 22:56:39.720922    8484 start.go:165] libmachine.API.Create for "embed-certs-20220516225628-2444" (driver="docker")
	I0516 22:56:39.720922    8484 client.go:168] LocalClient.Create starting
	I0516 22:56:39.721650    8484 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 22:56:39.721650    8484 main.go:134] libmachine: Decoding PEM data...
	I0516 22:56:39.721650    8484 main.go:134] libmachine: Parsing certificate...
	I0516 22:56:39.721650    8484 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 22:56:39.722383    8484 main.go:134] libmachine: Decoding PEM data...
	I0516 22:56:39.722431    8484 main.go:134] libmachine: Parsing certificate...
	I0516 22:56:39.733588    8484 cli_runner.go:164] Run: docker network inspect embed-certs-20220516225628-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:56:40.819577    8484 cli_runner.go:211] docker network inspect embed-certs-20220516225628-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:56:40.819626    8484 cli_runner.go:217] Completed: docker network inspect embed-certs-20220516225628-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0857586s)
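Editor's note: the `--format` argument in the `docker network inspect` calls above is a Go text/template expression that the docker CLI evaluates against the network's inspect JSON. A minimal sketch of how such a template renders, using a mock struct; the `network` and `ipamConfig` types here are illustrative stand-ins for docker's inspect output, not real docker types:

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// ipamConfig mirrors one entry of a network's IPAM.Config list.
type ipamConfig struct{ Subnet, Gateway string }

// network is a cut-down stand-in for docker's network-inspect JSON shape.
type network struct {
	Name string
	IPAM struct{ Config []ipamConfig }
}

// render evaluates a simplified version of the log's --format template
// against n, concatenating all configured subnets via {{range}}.
func render(n network) string {
	tmpl := template.Must(template.New("net").Parse(
		`{"Name": "{{.Name}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}"}`))
	var b strings.Builder
	if err := tmpl.Execute(&b, n); err != nil {
		panic(err)
	}
	return b.String()
}

func main() {
	n := network{Name: "bridge"}
	n.IPAM.Config = []ipamConfig{{Subnet: "172.17.0.0/16", Gateway: "172.17.0.1"}}
	fmt.Println(render(n))
}
```

When the network does not exist, as in the failed calls above, docker exits with status 1 before any template evaluation happens.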
	I0516 22:56:40.829072    8484 network_create.go:272] running [docker network inspect embed-certs-20220516225628-2444] to gather additional debugging logs...
	I0516 22:56:40.829072    8484 cli_runner.go:164] Run: docker network inspect embed-certs-20220516225628-2444
	W0516 22:56:41.890484    8484 cli_runner.go:211] docker network inspect embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:56:41.890484    8484 cli_runner.go:217] Completed: docker network inspect embed-certs-20220516225628-2444: (1.061403s)
	I0516 22:56:41.890484    8484 network_create.go:275] error running [docker network inspect embed-certs-20220516225628-2444]: docker network inspect embed-certs-20220516225628-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20220516225628-2444
	I0516 22:56:41.890484    8484 network_create.go:277] output of [docker network inspect embed-certs-20220516225628-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20220516225628-2444
	
	** /stderr **
	I0516 22:56:41.897471    8484 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 22:56:42.987862    8484 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0902523s)
	I0516 22:56:43.008069    8484 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000a58648] misses:0}
	I0516 22:56:43.009168    8484 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
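Editor's note: each reserved-subnet entry above lists a gateway, client range, and broadcast address derived from the /24. A sketch of that derivation, assuming the conventional /24 layout the log reports (gateway at .1, clients .2 through .254, broadcast .255); `describe` and `subnetInfo` are hypothetical names for illustration, not minikube's actual network.go API:

```go
package main

import (
	"fmt"
	"net"
)

// subnetInfo holds the derived addresses for a /24 network.
type subnetInfo struct {
	Gateway, ClientMin, ClientMax, Broadcast string
}

// describe derives the conventional address roles within a /24 CIDR.
// Only /24 networks are handled, matching the subnets seen in the log.
func describe(cidr string) (subnetInfo, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return subnetInfo{}, err
	}
	base := ipnet.IP.To4()
	at := func(host byte) string {
		return net.IPv4(base[0], base[1], base[2], host).String()
	}
	return subnetInfo{
		Gateway:   at(1),   // first usable host, used as the bridge gateway
		ClientMin: at(2),   // first address handed out to containers
		ClientMax: at(254), // last usable host in a /24
		Broadcast: at(255), // broadcast address of the /24
	}, nil
}

func main() {
	info, _ := describe("192.168.49.0/24")
	fmt.Printf("%+v\n", info)
}
```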
	I0516 22:56:43.009168    8484 network_create.go:115] attempt to create docker network embed-certs-20220516225628-2444 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0516 22:56:43.020976    8484 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444
	W0516 22:56:44.083052    8484 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:56:44.083082    8484 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444: (1.0617712s)
	W0516 22:56:44.083158    8484 network_create.go:107] failed to create docker network embed-certs-20220516225628-2444 192.168.49.0/24, will retry: subnet is taken
	I0516 22:56:44.101477    8484 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a58648] amended:false}} dirty:map[] misses:0}
	I0516 22:56:44.101477    8484 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:56:44.118688    8484 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a58648] amended:true}} dirty:map[192.168.49.0:0xc000a58648 192.168.58.0:0xc0006244a8] misses:0}
	I0516 22:56:44.118688    8484 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:56:44.119421    8484 network_create.go:115] attempt to create docker network embed-certs-20220516225628-2444 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0516 22:56:44.128992    8484 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444
	W0516 22:56:45.200931    8484 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:56:45.200962    8484 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444: (1.071809s)
	W0516 22:56:45.201031    8484 network_create.go:107] failed to create docker network embed-certs-20220516225628-2444 192.168.58.0/24, will retry: subnet is taken
	I0516 22:56:45.219557    8484 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a58648] amended:true}} dirty:map[192.168.49.0:0xc000a58648 192.168.58.0:0xc0006244a8] misses:1}
	I0516 22:56:45.219557    8484 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:56:45.238557    8484 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a58648] amended:true}} dirty:map[192.168.49.0:0xc000a58648 192.168.58.0:0xc0006244a8 192.168.67.0:0xc000a58730] misses:1}
	I0516 22:56:45.238622    8484 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:56:45.238684    8484 network_create.go:115] attempt to create docker network embed-certs-20220516225628-2444 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0516 22:56:45.245604    8484 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444
	W0516 22:56:46.328995    8484 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:56:46.329070    8484 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444: (1.0833821s)
	W0516 22:56:46.329141    8484 network_create.go:107] failed to create docker network embed-certs-20220516225628-2444 192.168.67.0/24, will retry: subnet is taken
	I0516 22:56:46.347607    8484 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a58648] amended:true}} dirty:map[192.168.49.0:0xc000a58648 192.168.58.0:0xc0006244a8 192.168.67.0:0xc000a58730] misses:2}
	I0516 22:56:46.347607    8484 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:56:46.364697    8484 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a58648] amended:true}} dirty:map[192.168.49.0:0xc000a58648 192.168.58.0:0xc0006244a8 192.168.67.0:0xc000a58730 192.168.76.0:0xc000624540] misses:2}
	I0516 22:56:46.364697    8484 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:56:46.364697    8484 network_create.go:115] attempt to create docker network embed-certs-20220516225628-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0516 22:56:46.375799    8484 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444
	W0516 22:56:47.430400    8484 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:56:47.430400    8484 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444: (1.0545156s)
	E0516 22:56:47.430400    8484 network_create.go:104] error while trying to create docker network embed-certs-20220516225628-2444 192.168.76.0/24: create docker network embed-certs-20220516225628-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network caaa51ba71eba8175967112d58f129ee86a4c9c06ab6e0726994e33be732307d (br-caaa51ba71eb): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	W0516 22:56:47.430400    8484 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network embed-certs-20220516225628-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network caaa51ba71eba8175967112d58f129ee86a4c9c06ab6e0726994e33be732307d (br-caaa51ba71eb): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network embed-certs-20220516225628-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network caaa51ba71eba8175967112d58f129ee86a4c9c06ab6e0726994e33be732307d (br-caaa51ba71eb): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	I0516 22:56:47.445401    8484 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 22:56:48.563878    8484 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1184676s)
	I0516 22:56:48.572635    8484 cli_runner.go:164] Run: docker volume create embed-certs-20220516225628-2444 --label name.minikube.sigs.k8s.io=embed-certs-20220516225628-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 22:56:49.611562    8484 cli_runner.go:211] docker volume create embed-certs-20220516225628-2444 --label name.minikube.sigs.k8s.io=embed-certs-20220516225628-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 22:56:49.611562    8484 cli_runner.go:217] Completed: docker volume create embed-certs-20220516225628-2444 --label name.minikube.sigs.k8s.io=embed-certs-20220516225628-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0388532s)
	I0516 22:56:49.611562    8484 client.go:171] LocalClient.Create took 9.8905552s
	I0516 22:56:51.630180    8484 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:56:51.637273    8484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444
	W0516 22:56:52.734384    8484 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:56:52.734384    8484 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: (1.0971017s)
	I0516 22:56:52.734384    8484 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:56:53.031209    8484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444
	W0516 22:56:54.111642    8484 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:56:54.111642    8484 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: (1.080424s)
	W0516 22:56:54.111642    8484 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	
	W0516 22:56:54.111642    8484 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:56:54.121642    8484 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:56:54.128643    8484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444
	W0516 22:56:55.227530    8484 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:56:55.227775    8484 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: (1.0978698s)
	I0516 22:56:55.227940    8484 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:56:55.540324    8484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444
	W0516 22:56:56.652678    8484 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:56:56.652678    8484 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: (1.1123452s)
	W0516 22:56:56.652678    8484 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	
	W0516 22:56:56.652678    8484 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:56:56.652678    8484 start.go:134] duration metric: createHost completed in 16.9365181s
	I0516 22:56:56.652678    8484 start.go:81] releasing machines lock for "embed-certs-20220516225628-2444", held for 16.9365181s
	W0516 22:56:56.652678    8484 start.go:608] error starting host: creating host: create: creating: setting up container node: creating volume for embed-certs-20220516225628-2444 container: docker volume create embed-certs-20220516225628-2444 --label name.minikube.sigs.k8s.io=embed-certs-20220516225628-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220516225628-2444: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220516225628-2444': mkdir /var/lib/docker/volumes/embed-certs-20220516225628-2444: read-only file system
	I0516 22:56:56.668715    8484 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 22:56:57.789077    8484 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:56:57.789077    8484 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.1191533s)
	I0516 22:56:57.789077    8484 delete.go:82] Unable to get host status for embed-certs-20220516225628-2444, assuming it has already been deleted: state: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	W0516 22:56:57.789077    8484 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for embed-certs-20220516225628-2444 container: docker volume create embed-certs-20220516225628-2444 --label name.minikube.sigs.k8s.io=embed-certs-20220516225628-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220516225628-2444: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220516225628-2444': mkdir /var/lib/docker/volumes/embed-certs-20220516225628-2444: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for embed-certs-20220516225628-2444 container: docker volume create embed-certs-20220516225628-2444 --label name.minikube.sigs.k8s.io=embed-certs-20220516225628-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220516225628-2444: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220516225628-2444': mkdir /var/lib/docker/volumes/embed-certs-20220516225628-2444: read-only file system
	
	I0516 22:56:57.789077    8484 start.go:623] Will try again in 5 seconds ...
	I0516 22:57:02.789635    8484 start.go:352] acquiring machines lock for embed-certs-20220516225628-2444: {Name:mk313f3adfa614f48756e4c4bd1949083e33b93c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:57:02.790064    8484 start.go:356] acquired machines lock for "embed-certs-20220516225628-2444" in 182µs
	I0516 22:57:02.790064    8484 start.go:94] Skipping create...Using existing machine configuration
	I0516 22:57:02.790064    8484 fix.go:55] fixHost starting: 
	I0516 22:57:02.805354    8484 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 22:57:03.905139    8484 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:57:03.905139    8484 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.0997417s)
	I0516 22:57:03.905139    8484 fix.go:103] recreateIfNeeded on embed-certs-20220516225628-2444: state= err=unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:57:03.905139    8484 fix.go:108] machineExists: false. err=machine does not exist
	I0516 22:57:03.909352    8484 out.go:177] * docker "embed-certs-20220516225628-2444" container is missing, will recreate.
	I0516 22:57:03.911774    8484 delete.go:124] DEMOLISHING embed-certs-20220516225628-2444 ...
	I0516 22:57:03.929057    8484 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 22:57:05.047254    8484 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:57:05.047316    8484 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.1180597s)
	W0516 22:57:05.048467    8484 stop.go:75] unable to get state: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:57:05.048589    8484 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:57:05.076774    8484 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 22:57:06.191895    8484 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:57:06.191895    8484 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.1151122s)
	I0516 22:57:06.191895    8484 delete.go:82] Unable to get host status for embed-certs-20220516225628-2444, assuming it has already been deleted: state: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:57:06.201317    8484 cli_runner.go:164] Run: docker container inspect -f {{.Id}} embed-certs-20220516225628-2444
	W0516 22:57:07.282252    8484 cli_runner.go:211] docker container inspect -f {{.Id}} embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:57:07.282252    8484 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} embed-certs-20220516225628-2444: (1.080926s)
	I0516 22:57:07.282252    8484 kic.go:356] could not find the container embed-certs-20220516225628-2444 to remove it. will try anyways
	I0516 22:57:07.289111    8484 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 22:57:08.357997    8484 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:57:08.357997    8484 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.0688775s)
	W0516 22:57:08.357997    8484 oci.go:84] error getting container status, will try to delete anyways: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:57:08.365901    8484 cli_runner.go:164] Run: docker exec --privileged -t embed-certs-20220516225628-2444 /bin/bash -c "sudo init 0"
	W0516 22:57:09.433782    8484 cli_runner.go:211] docker exec --privileged -t embed-certs-20220516225628-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 22:57:09.433845    8484 cli_runner.go:217] Completed: docker exec --privileged -t embed-certs-20220516225628-2444 /bin/bash -c "sudo init 0": (1.0675866s)
	I0516 22:57:09.433845    8484 oci.go:641] error shutdown embed-certs-20220516225628-2444: docker exec --privileged -t embed-certs-20220516225628-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:57:10.447952    8484 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 22:57:11.513660    8484 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:57:11.513660    8484 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.065699s)
	I0516 22:57:11.513660    8484 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:57:11.513660    8484 oci.go:655] temporary error: container embed-certs-20220516225628-2444 status is  but expect it to be exited
	I0516 22:57:11.513660    8484 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:57:11.997356    8484 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 22:57:13.091838    8484 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:57:13.091838    8484 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.0942482s)
	I0516 22:57:13.091838    8484 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:57:13.091838    8484 oci.go:655] temporary error: container embed-certs-20220516225628-2444 status is  but expect it to be exited
	I0516 22:57:13.091838    8484 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:57:14.004114    8484 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 22:57:15.052370    8484 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:57:15.052511    8484 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.0482469s)
	I0516 22:57:15.052653    8484 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:57:15.052653    8484 oci.go:655] temporary error: container embed-certs-20220516225628-2444 status is  but expect it to be exited
	I0516 22:57:15.052653    8484 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:57:15.707910    8484 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 22:57:16.778156    8484 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:57:16.778345    8484 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.0702368s)
	I0516 22:57:16.778345    8484 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:57:16.778345    8484 oci.go:655] temporary error: container embed-certs-20220516225628-2444 status is  but expect it to be exited
	I0516 22:57:16.778345    8484 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:57:17.903339    8484 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 22:57:19.051823    8484 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:57:19.051928    8484 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.1484747s)
	I0516 22:57:19.051986    8484 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:57:19.051986    8484 oci.go:655] temporary error: container embed-certs-20220516225628-2444 status is  but expect it to be exited
	I0516 22:57:19.051986    8484 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:57:20.577823    8484 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 22:57:21.642219    8484 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:57:21.642219    8484 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.0643864s)
	I0516 22:57:21.642219    8484 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:57:21.642219    8484 oci.go:655] temporary error: container embed-certs-20220516225628-2444 status is  but expect it to be exited
	I0516 22:57:21.642219    8484 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:57:24.697640    8484 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 22:57:25.776645    8484 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:57:25.776859    8484 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.0789511s)
	I0516 22:57:25.776977    8484 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:57:25.777007    8484 oci.go:655] temporary error: container embed-certs-20220516225628-2444 status is  but expect it to be exited
	I0516 22:57:25.777066    8484 oci.go:88] couldn't shut down embed-certs-20220516225628-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	 
	I0516 22:57:25.786008    8484 cli_runner.go:164] Run: docker rm -f -v embed-certs-20220516225628-2444
	I0516 22:57:26.869530    8484 cli_runner.go:217] Completed: docker rm -f -v embed-certs-20220516225628-2444: (1.0833192s)
	I0516 22:57:26.877519    8484 cli_runner.go:164] Run: docker container inspect -f {{.Id}} embed-certs-20220516225628-2444
	W0516 22:57:27.936027    8484 cli_runner.go:211] docker container inspect -f {{.Id}} embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:57:27.936027    8484 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} embed-certs-20220516225628-2444: (1.0584987s)
	I0516 22:57:27.943033    8484 cli_runner.go:164] Run: docker network inspect embed-certs-20220516225628-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:57:29.014863    8484 cli_runner.go:211] docker network inspect embed-certs-20220516225628-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:57:29.014943    8484 cli_runner.go:217] Completed: docker network inspect embed-certs-20220516225628-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0717747s)
	I0516 22:57:29.024997    8484 network_create.go:272] running [docker network inspect embed-certs-20220516225628-2444] to gather additional debugging logs...
	I0516 22:57:29.024997    8484 cli_runner.go:164] Run: docker network inspect embed-certs-20220516225628-2444
	W0516 22:57:30.093473    8484 cli_runner.go:211] docker network inspect embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:57:30.093473    8484 cli_runner.go:217] Completed: docker network inspect embed-certs-20220516225628-2444: (1.0684674s)
	I0516 22:57:30.093473    8484 network_create.go:275] error running [docker network inspect embed-certs-20220516225628-2444]: docker network inspect embed-certs-20220516225628-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20220516225628-2444
	I0516 22:57:30.093473    8484 network_create.go:277] output of [docker network inspect embed-certs-20220516225628-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20220516225628-2444
	
	** /stderr **
	W0516 22:57:30.094500    8484 delete.go:139] delete failed (probably ok) <nil>
	I0516 22:57:30.094500    8484 fix.go:115] Sleeping 1 second for extra luck!
	I0516 22:57:31.098142    8484 start.go:131] createHost starting for "" (driver="docker")
	I0516 22:57:31.101509    8484 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0516 22:57:31.105936    8484 start.go:165] libmachine.API.Create for "embed-certs-20220516225628-2444" (driver="docker")
	I0516 22:57:31.105936    8484 client.go:168] LocalClient.Create starting
	I0516 22:57:31.106597    8484 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 22:57:31.106597    8484 main.go:134] libmachine: Decoding PEM data...
	I0516 22:57:31.106597    8484 main.go:134] libmachine: Parsing certificate...
	I0516 22:57:31.106597    8484 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 22:57:31.107228    8484 main.go:134] libmachine: Decoding PEM data...
	I0516 22:57:31.107261    8484 main.go:134] libmachine: Parsing certificate...
	I0516 22:57:31.116211    8484 cli_runner.go:164] Run: docker network inspect embed-certs-20220516225628-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:57:32.168497    8484 cli_runner.go:211] docker network inspect embed-certs-20220516225628-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:57:32.168639    8484 cli_runner.go:217] Completed: docker network inspect embed-certs-20220516225628-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0517073s)
	I0516 22:57:32.176098    8484 network_create.go:272] running [docker network inspect embed-certs-20220516225628-2444] to gather additional debugging logs...
	I0516 22:57:32.176098    8484 cli_runner.go:164] Run: docker network inspect embed-certs-20220516225628-2444
	W0516 22:57:33.272180    8484 cli_runner.go:211] docker network inspect embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:57:33.272180    8484 cli_runner.go:217] Completed: docker network inspect embed-certs-20220516225628-2444: (1.0960726s)
	I0516 22:57:33.272180    8484 network_create.go:275] error running [docker network inspect embed-certs-20220516225628-2444]: docker network inspect embed-certs-20220516225628-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20220516225628-2444
	I0516 22:57:33.272180    8484 network_create.go:277] output of [docker network inspect embed-certs-20220516225628-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20220516225628-2444
	
	** /stderr **
	I0516 22:57:33.280937    8484 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 22:57:34.315042    8484 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.033938s)
	I0516 22:57:34.341323    8484 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a58648] amended:true}} dirty:map[192.168.49.0:0xc000a58648 192.168.58.0:0xc0006244a8 192.168.67.0:0xc000a58730 192.168.76.0:0xc000624540] misses:2}
	I0516 22:57:34.341323    8484 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:57:34.357325    8484 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a58648] amended:true}} dirty:map[192.168.49.0:0xc000a58648 192.168.58.0:0xc0006244a8 192.168.67.0:0xc000a58730 192.168.76.0:0xc000624540] misses:3}
	I0516 22:57:34.357325    8484 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:57:34.371318    8484 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a58648 192.168.58.0:0xc0006244a8 192.168.67.0:0xc000a58730 192.168.76.0:0xc000624540] amended:false}} dirty:map[] misses:0}
	I0516 22:57:34.371318    8484 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:57:34.386347    8484 network.go:279] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a58648 192.168.58.0:0xc0006244a8 192.168.67.0:0xc000a58730 192.168.76.0:0xc000624540] amended:false}} dirty:map[] misses:0}
	I0516 22:57:34.386347    8484 network.go:238] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:57:34.401359    8484 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a58648 192.168.58.0:0xc0006244a8 192.168.67.0:0xc000a58730 192.168.76.0:0xc000624540] amended:true}} dirty:map[192.168.49.0:0xc000a58648 192.168.58.0:0xc0006244a8 192.168.67.0:0xc000a58730 192.168.76.0:0xc000624540 192.168.85.0:0xc000a58710] misses:0}
	I0516 22:57:34.401359    8484 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:57:34.401359    8484 network_create.go:115] attempt to create docker network embed-certs-20220516225628-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0516 22:57:34.409355    8484 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444
	W0516 22:57:35.471886    8484 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:57:35.471886    8484 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444: (1.0625217s)
	E0516 22:57:35.471886    8484 network_create.go:104] error while trying to create docker network embed-certs-20220516225628-2444 192.168.85.0/24: create docker network embed-certs-20220516225628-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 6edba793edf04f88092d8d6c7b1ec7337610438ec965c5d82952b23245c6815f (br-6edba793edf0): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	W0516 22:57:35.471886    8484 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network embed-certs-20220516225628-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 6edba793edf04f88092d8d6c7b1ec7337610438ec965c5d82952b23245c6815f (br-6edba793edf0): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network embed-certs-20220516225628-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 6edba793edf04f88092d8d6c7b1ec7337610438ec965c5d82952b23245c6815f (br-6edba793edf0): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	I0516 22:57:35.485907    8484 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 22:57:36.532441    8484 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0465249s)
	I0516 22:57:36.542363    8484 cli_runner.go:164] Run: docker volume create embed-certs-20220516225628-2444 --label name.minikube.sigs.k8s.io=embed-certs-20220516225628-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 22:57:37.624999    8484 cli_runner.go:211] docker volume create embed-certs-20220516225628-2444 --label name.minikube.sigs.k8s.io=embed-certs-20220516225628-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 22:57:37.624999    8484 cli_runner.go:217] Completed: docker volume create embed-certs-20220516225628-2444 --label name.minikube.sigs.k8s.io=embed-certs-20220516225628-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0826021s)
	I0516 22:57:37.624999    8484 client.go:171] LocalClient.Create took 6.5190075s
	I0516 22:57:39.638458    8484 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:57:39.647882    8484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444
	W0516 22:57:40.723542    8484 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:57:40.723542    8484 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: (1.0756513s)
	I0516 22:57:40.723542    8484 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:57:41.067429    8484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444
	W0516 22:57:42.120679    8484 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:57:42.120679    8484 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: (1.0532417s)
	W0516 22:57:42.120679    8484 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	
	W0516 22:57:42.120679    8484 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:57:42.130680    8484 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:57:42.137683    8484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444
	W0516 22:57:43.231977    8484 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:57:43.232052    8484 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: (1.0939944s)
	I0516 22:57:43.232230    8484 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:57:43.468116    8484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444
	W0516 22:57:44.541794    8484 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:57:44.541794    8484 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: (1.0734602s)
	W0516 22:57:44.541794    8484 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	
	W0516 22:57:44.541794    8484 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:57:44.541794    8484 start.go:134] duration metric: createHost completed in 13.4433478s
	I0516 22:57:44.554211    8484 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:57:44.561540    8484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444
	W0516 22:57:45.637299    8484 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:57:45.637371    8484 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: (1.0755657s)
	I0516 22:57:45.637371    8484 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:57:45.898014    8484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444
	W0516 22:57:46.986226    8484 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:57:46.986226    8484 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: (1.0882027s)
	W0516 22:57:46.986226    8484 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	
	W0516 22:57:46.986226    8484 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:57:46.997239    8484 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:57:47.004228    8484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444
	W0516 22:57:48.062565    8484 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:57:48.062565    8484 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: (1.0583274s)
	I0516 22:57:48.062565    8484 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:57:48.275660    8484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444
	W0516 22:57:49.383364    8484 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:57:49.383364    8484 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: (1.1076946s)
	W0516 22:57:49.383364    8484 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	
	W0516 22:57:49.383364    8484 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:57:49.383364    8484 fix.go:57] fixHost completed within 46.5929042s
	I0516 22:57:49.383364    8484 start.go:81] releasing machines lock for "embed-certs-20220516225628-2444", held for 46.5929042s
	W0516 22:57:49.383364    8484 out.go:239] * Failed to start docker container. Running "minikube delete -p embed-certs-20220516225628-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220516225628-2444 container: docker volume create embed-certs-20220516225628-2444 --label name.minikube.sigs.k8s.io=embed-certs-20220516225628-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220516225628-2444: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220516225628-2444': mkdir /var/lib/docker/volumes/embed-certs-20220516225628-2444: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p embed-certs-20220516225628-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220516225628-2444 container: docker volume create embed-certs-20220516225628-2444 --label name.minikube.sigs.k8s.io=embed-certs-20220516225628-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220516225628-2444: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220516225628-2444': mkdir /var/lib/docker/volumes/embed-certs-20220516225628-2444: read-only file system
	
	I0516 22:57:49.392374    8484 out.go:177] 
	W0516 22:57:49.395382    8484 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220516225628-2444 container: docker volume create embed-certs-20220516225628-2444 --label name.minikube.sigs.k8s.io=embed-certs-20220516225628-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220516225628-2444: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220516225628-2444': mkdir /var/lib/docker/volumes/embed-certs-20220516225628-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220516225628-2444 container: docker volume create embed-certs-20220516225628-2444 --label name.minikube.sigs.k8s.io=embed-certs-20220516225628-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220516225628-2444: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220516225628-2444': mkdir /var/lib/docker/volumes/embed-certs-20220516225628-2444: read-only file system
	
	W0516 22:57:49.395382    8484 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0516 22:57:49.395382    8484 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0516 22:57:49.398371    8484 out.go:177] 

** /stderr **
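The subnet hunt recorded in the log above (skipping 192.168.49.0/24 through 192.168.76.0/24 as reserved, then reserving 192.168.85.0/24) can be sketched as a simple linear scan. This is an illustrative reconstruction only: the start octet and the +9 step are inferred from the log output, not taken from minikube's `network.go`.

```python
# Illustrative sketch of the free-subnet search visible in the log above.
# Start octet (49) and step (+9) are inferred from the logged sequence
# 49 -> 58 -> 67 -> 76 -> 85, not from minikube's actual implementation.

def pick_free_subnet(reserved, start=49, step=9):
    """Return the first 192.168.<octet>.0/24 CIDR not present in `reserved`."""
    octet = start
    while octet <= 254:
        cidr = f"192.168.{octet}.0/24"
        if cidr not in reserved:
            return cidr
        octet += step
    raise RuntimeError("no free private /24 found")

reserved = {
    "192.168.49.0/24",
    "192.168.58.0/24",
    "192.168.67.0/24",
    "192.168.76.0/24",
}
print(pick_free_subnet(reserved))  # 192.168.85.0/24
```

Note that minikube's reservation map only tracks subnets it handed out itself; the `docker network create` still failed here because the daemon already held a bridge network (br-ea4bbeff936d) whose IPv4 range overlapped the chosen subnet.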
start_stop_delete_test.go:190: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p embed-certs-20220516225628-2444 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.6": exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220516225628-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220516225628-2444: exit status 1 (1.1148515s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220516225628-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220516225628-2444 -n embed-certs-20220516225628-2444

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220516225628-2444 -n embed-certs-20220516225628-2444: exit status 7 (2.916755s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:57:53.522669    2976 status.go:247] status error: host: state: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220516225628-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (84.67s)
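Between the failed port-22 inspections in the FirstStart log, `retry.go:31` waits a short jittered interval (328ms, 220ms, 242ms, 198ms above) before re-running `docker container inspect`. A rough Python sketch of that retry pattern follows; the attempt cap and delay bounds are illustrative assumptions, not minikube's real parameters.

```python
import random
import time

def retry(fn, attempts=3, min_ms=150, max_ms=350):
    """Retry fn with a short random delay between attempts, mirroring the
    jittered 'will retry after ...ms' lines in the log above. Re-raises the
    last error once attempts are exhausted."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(random.uniform(min_ms, max_ms) / 1000.0)

# Demo: a call that fails once (like 'No such container'), then succeeds.
tries = []
def flaky():
    tries.append(1)
    if len(tries) < 2:
        raise RuntimeError("No such container")  # transient failure
    return "22/tcp -> 54321"

print(retry(flaky))  # 22/tcp -> 54321
```

In the log the retries never help, because the container was never created in the first place: the underlying `docker volume create` failed, so every inspect attempt hits the same "No such container" error.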

TestStartStop/group/old-k8s-version/serial/DeployApp (8.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context old-k8s-version-20220516225533-2444 create -f testdata\busybox.yaml
start_stop_delete_test.go:198: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220516225533-2444 create -f testdata\busybox.yaml: exit status 1 (245.5477ms)

** stderr ** 
	error: context "old-k8s-version-20220516225533-2444" does not exist

** /stderr **
start_stop_delete_test.go:198: kubectl --context old-k8s-version-20220516225533-2444 create -f testdata\busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220516225533-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220516225533-2444: exit status 1 (1.1098439s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220516225533-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220516225533-2444 -n old-k8s-version-20220516225533-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220516225533-2444 -n old-k8s-version-20220516225533-2444: exit status 7 (2.9406057s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:57:04.198862    3684 status.go:247] status error: host: state: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220516225533-2444" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220516225533-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220516225533-2444: exit status 1 (1.1362834s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220516225533-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220516225533-2444 -n old-k8s-version-20220516225533-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220516225533-2444 -n old-k8s-version-20220516225533-2444: exit status 7 (2.934358s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:57:08.281595    3692 status.go:247] status error: host: state: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220516225533-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (8.39s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (7.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-20220516225533-2444 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-20220516225533-2444 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.9466379s)
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context old-k8s-version-20220516225533-2444 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:217: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220516225533-2444 describe deploy/metrics-server -n kube-system: exit status 1 (248.1595ms)

** stderr ** 
	error: context "old-k8s-version-20220516225533-2444" does not exist

** /stderr **
start_stop_delete_test.go:219: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-20220516225533-2444 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:223: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220516225533-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220516225533-2444: exit status 1 (1.1127963s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220516225533-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220516225533-2444 -n old-k8s-version-20220516225533-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220516225533-2444 -n old-k8s-version-20220516225533-2444: exit status 7 (2.8883879s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:57:15.492091    8800 status.go:247] status error: host: state: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220516225533-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (7.21s)

TestStartStop/group/old-k8s-version/serial/Stop (26.71s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-20220516225533-2444 --alsologtostderr -v=3

=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:230: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p old-k8s-version-20220516225533-2444 --alsologtostderr -v=3: exit status 82 (22.673574s)

-- stdout --
	* Stopping node "old-k8s-version-20220516225533-2444"  ...
	* Stopping node "old-k8s-version-20220516225533-2444"  ...
	* Stopping node "old-k8s-version-20220516225533-2444"  ...
	* Stopping node "old-k8s-version-20220516225533-2444"  ...
	* Stopping node "old-k8s-version-20220516225533-2444"  ...
	* Stopping node "old-k8s-version-20220516225533-2444"  ...
	
	

-- /stdout --
** stderr ** 
	I0516 22:57:15.790489    7592 out.go:296] Setting OutFile to fd 1800 ...
	I0516 22:57:15.872369    7592 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:57:15.873371    7592 out.go:309] Setting ErrFile to fd 276...
	I0516 22:57:15.873371    7592 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:57:15.885372    7592 out.go:303] Setting JSON to false
	I0516 22:57:15.885372    7592 daemonize_windows.go:44] trying to kill existing schedule stop for profile old-k8s-version-20220516225533-2444...
	I0516 22:57:15.897816    7592 ssh_runner.go:195] Run: systemctl --version
	I0516 22:57:15.897816    7592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444
	W0516 22:57:18.494526    7592 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:57:18.494526    7592 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: (2.596688s)
	I0516 22:57:18.507329    7592 ssh_runner.go:195] Run: sudo service minikube-scheduled-stop stop
	I0516 22:57:18.515382    7592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444
	W0516 22:57:19.605076    7592 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:57:19.605076    7592 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: (1.0896851s)
	I0516 22:57:19.605076    7592 retry.go:31] will retry after 360.127272ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:57:19.976986    7592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444
	W0516 22:57:21.043637    7592 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:57:21.043791    7592 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: (1.0665013s)
	I0516 22:57:21.043991    7592 openrc.go:165] stop output: 
	E0516 22:57:21.044041    7592 daemonize_windows.go:38] error terminating scheduled stop for profile old-k8s-version-20220516225533-2444: stopping schedule-stop service for profile old-k8s-version-20220516225533-2444: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:57:21.044076    7592 mustload.go:65] Loading cluster: old-k8s-version-20220516225533-2444
	I0516 22:57:21.044972    7592 config.go:178] Loaded profile config "old-k8s-version-20220516225533-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0516 22:57:21.045189    7592 stop.go:39] StopHost: old-k8s-version-20220516225533-2444
	I0516 22:57:21.049990    7592 out.go:177] * Stopping node "old-k8s-version-20220516225533-2444"  ...
	I0516 22:57:21.067857    7592 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:57:22.144928    7592 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:57:22.145029    7592 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.0769145s)
	W0516 22:57:22.145105    7592 stop.go:75] unable to get state: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	W0516 22:57:22.145190    7592 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:57:22.145247    7592 retry.go:31] will retry after 937.714187ms: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:57:23.088840    7592 stop.go:39] StopHost: old-k8s-version-20220516225533-2444
	I0516 22:57:23.095095    7592 out.go:177] * Stopping node "old-k8s-version-20220516225533-2444"  ...
	I0516 22:57:23.112865    7592 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:57:24.164544    7592 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:57:24.164544    7592 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.0516697s)
	W0516 22:57:24.164544    7592 stop.go:75] unable to get state: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	W0516 22:57:24.164544    7592 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:57:24.164544    7592 retry.go:31] will retry after 1.386956246s: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:57:25.560071    7592 stop.go:39] StopHost: old-k8s-version-20220516225533-2444
	I0516 22:57:25.564176    7592 out.go:177] * Stopping node "old-k8s-version-20220516225533-2444"  ...
	I0516 22:57:25.584411    7592 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:57:26.663559    7592 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:57:26.663559    7592 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.0791121s)
	W0516 22:57:26.663559    7592 stop.go:75] unable to get state: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	W0516 22:57:26.663559    7592 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:57:26.663559    7592 retry.go:31] will retry after 2.670351914s: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:57:29.347677    7592 stop.go:39] StopHost: old-k8s-version-20220516225533-2444
	I0516 22:57:29.353709    7592 out.go:177] * Stopping node "old-k8s-version-20220516225533-2444"  ...
	I0516 22:57:29.370756    7592 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:57:30.424908    7592 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:57:30.425035    7592 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.0539028s)
	W0516 22:57:30.425058    7592 stop.go:75] unable to get state: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	W0516 22:57:30.425058    7592 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:57:30.425058    7592 retry.go:31] will retry after 1.909024939s: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:57:32.334656    7592 stop.go:39] StopHost: old-k8s-version-20220516225533-2444
	I0516 22:57:32.339657    7592 out.go:177] * Stopping node "old-k8s-version-20220516225533-2444"  ...
	I0516 22:57:32.358737    7592 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:57:33.446754    7592 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:57:33.446754    7592 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.0880079s)
	W0516 22:57:33.446754    7592 stop.go:75] unable to get state: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	W0516 22:57:33.446754    7592 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:57:33.446754    7592 retry.go:31] will retry after 3.323628727s: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:57:36.786032    7592 stop.go:39] StopHost: old-k8s-version-20220516225533-2444
	I0516 22:57:36.791022    7592 out.go:177] * Stopping node "old-k8s-version-20220516225533-2444"  ...
	I0516 22:57:36.810265    7592 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:57:37.894738    7592 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:57:37.894961    7592 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.084464s)
	W0516 22:57:37.895025    7592 stop.go:75] unable to get state: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	W0516 22:57:37.895025    7592 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:57:37.902183    7592 out.go:177] 
	W0516 22:57:37.904912    7592 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	
	W0516 22:57:37.905018    7592 out.go:239] * 
	* 
	W0516 22:57:38.135914    7592 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_39.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_39.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0516 22:57:38.163364    7592 out.go:177] 

** /stderr **
start_stop_delete_test.go:232: failed stopping minikube - first stop-. args "out/minikube-windows-amd64.exe stop -p old-k8s-version-20220516225533-2444 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220516225533-2444

=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220516225533-2444: exit status 1 (1.1140691s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220516225533-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220516225533-2444 -n old-k8s-version-20220516225533-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220516225533-2444 -n old-k8s-version-20220516225533-2444: exit status 7 (2.9095334s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:57:42.200670    6536 status.go:247] status error: host: state: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220516225533-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (26.71s)

TestStartStop/group/no-preload/serial/DeployApp (8.23s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context no-preload-20220516225557-2444 create -f testdata\busybox.yaml
start_stop_delete_test.go:198: (dbg) Non-zero exit: kubectl --context no-preload-20220516225557-2444 create -f testdata\busybox.yaml: exit status 1 (241.3227ms)

** stderr ** 
	error: context "no-preload-20220516225557-2444" does not exist

** /stderr **
start_stop_delete_test.go:198: kubectl --context no-preload-20220516225557-2444 create -f testdata\busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220516225557-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220516225557-2444: exit status 1 (1.0667717s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220516225557-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220516225557-2444 -n no-preload-20220516225557-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220516225557-2444 -n no-preload-20220516225557-2444: exit status 7 (2.8777941s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:57:27.073638    7220 status.go:247] status error: host: state: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220516225557-2444" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220516225557-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220516225557-2444: exit status 1 (1.1488761s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220516225557-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220516225557-2444 -n no-preload-20220516225557-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220516225557-2444 -n no-preload-20220516225557-2444: exit status 7 (2.8648611s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:57:31.098714    8228 status.go:247] status error: host: state: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220516225557-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (8.23s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (7.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-20220516225557-2444 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-20220516225557-2444 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.935182s)
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context no-preload-20220516225557-2444 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:217: (dbg) Non-zero exit: kubectl --context no-preload-20220516225557-2444 describe deploy/metrics-server -n kube-system: exit status 1 (243.9646ms)

** stderr ** 
	error: context "no-preload-20220516225557-2444" does not exist

** /stderr **
start_stop_delete_test.go:219: failed to get info on auto-pause deployments. args "kubectl --context no-preload-20220516225557-2444 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:223: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220516225557-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220516225557-2444: exit status 1 (1.1001103s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220516225557-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220516225557-2444 -n no-preload-20220516225557-2444

=== CONT  TestStartStop/group/no-preload/serial/EnableAddonWhileActive
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220516225557-2444 -n no-preload-20220516225557-2444: exit status 7 (2.8958096s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:57:38.289995    1064 status.go:247] status error: host: state: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220516225557-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (7.19s)

TestStartStop/group/no-preload/serial/Stop (26.57s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-20220516225557-2444 --alsologtostderr -v=3

=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:230: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p no-preload-20220516225557-2444 --alsologtostderr -v=3: exit status 82 (22.5078223s)

-- stdout --
	* Stopping node "no-preload-20220516225557-2444"  ...
	* Stopping node "no-preload-20220516225557-2444"  ...
	* Stopping node "no-preload-20220516225557-2444"  ...
	* Stopping node "no-preload-20220516225557-2444"  ...
	* Stopping node "no-preload-20220516225557-2444"  ...
	* Stopping node "no-preload-20220516225557-2444"  ...
	
	

-- /stdout --
** stderr ** 
	I0516 22:57:38.557989    8528 out.go:296] Setting OutFile to fd 1944 ...
	I0516 22:57:38.619976    8528 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:57:38.619976    8528 out.go:309] Setting ErrFile to fd 1572...
	I0516 22:57:38.619976    8528 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:57:38.633971    8528 out.go:303] Setting JSON to false
	I0516 22:57:38.634990    8528 daemonize_windows.go:44] trying to kill existing schedule stop for profile no-preload-20220516225557-2444...
	I0516 22:57:38.646969    8528 ssh_runner.go:195] Run: systemctl --version
	I0516 22:57:38.653969    8528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444
	W0516 22:57:41.135624    8528 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:57:41.135624    8528 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: (2.4816342s)
	I0516 22:57:41.145624    8528 ssh_runner.go:195] Run: sudo service minikube-scheduled-stop stop
	I0516 22:57:41.153627    8528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444
	W0516 22:57:42.232680    8528 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:57:42.232680    8528 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: (1.0790441s)
	I0516 22:57:42.232680    8528 retry.go:31] will retry after 360.127272ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:57:42.605430    8528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444
	W0516 22:57:43.675347    8528 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:57:43.675347    8528 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: (1.0691784s)
	I0516 22:57:43.675347    8528 openrc.go:165] stop output: 
	E0516 22:57:43.675347    8528 daemonize_windows.go:38] error terminating scheduled stop for profile no-preload-20220516225557-2444: stopping schedule-stop service for profile no-preload-20220516225557-2444: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:57:43.675347    8528 mustload.go:65] Loading cluster: no-preload-20220516225557-2444
	I0516 22:57:43.676041    8528 config.go:178] Loaded profile config "no-preload-20220516225557-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:57:43.676041    8528 stop.go:39] StopHost: no-preload-20220516225557-2444
	I0516 22:57:43.680940    8528 out.go:177] * Stopping node "no-preload-20220516225557-2444"  ...
	I0516 22:57:43.702620    8528 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:57:44.808249    8528 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:57:44.808249    8528 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.1055656s)
	W0516 22:57:44.808249    8528 stop.go:75] unable to get state: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	W0516 22:57:44.808249    8528 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:57:44.808249    8528 retry.go:31] will retry after 937.714187ms: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:57:45.748936    8528 stop.go:39] StopHost: no-preload-20220516225557-2444
	I0516 22:57:45.754943    8528 out.go:177] * Stopping node "no-preload-20220516225557-2444"  ...
	I0516 22:57:45.770417    8528 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:57:46.829083    8528 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:57:46.829083    8528 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.0584287s)
	W0516 22:57:46.829083    8528 stop.go:75] unable to get state: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	W0516 22:57:46.829083    8528 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:57:46.829083    8528 retry.go:31] will retry after 1.386956246s: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:57:48.221189    8528 stop.go:39] StopHost: no-preload-20220516225557-2444
	I0516 22:57:48.226264    8528 out.go:177] * Stopping node "no-preload-20220516225557-2444"  ...
	I0516 22:57:48.247303    8528 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:57:49.351974    8528 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:57:49.352052    8528 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.1044388s)
	W0516 22:57:49.352205    8528 stop.go:75] unable to get state: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	W0516 22:57:49.352287    8528 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:57:49.352373    8528 retry.go:31] will retry after 2.670351914s: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:57:52.023968    8528 stop.go:39] StopHost: no-preload-20220516225557-2444
	I0516 22:57:52.030583    8528 out.go:177] * Stopping node "no-preload-20220516225557-2444"  ...
	I0516 22:57:52.053352    8528 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:57:53.141769    8528 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:57:53.141769    8528 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.087608s)
	W0516 22:57:53.141769    8528 stop.go:75] unable to get state: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	W0516 22:57:53.141769    8528 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:57:53.141769    8528 retry.go:31] will retry after 1.909024939s: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:57:55.063604    8528 stop.go:39] StopHost: no-preload-20220516225557-2444
	I0516 22:57:55.067604    8528 out.go:177] * Stopping node "no-preload-20220516225557-2444"  ...
	I0516 22:57:55.087616    8528 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:57:56.131038    8528 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:57:56.131038    8528 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.0432634s)
	W0516 22:57:56.131038    8528 stop.go:75] unable to get state: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	W0516 22:57:56.131038    8528 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:57:56.131038    8528 retry.go:31] will retry after 3.323628727s: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:57:59.455897    8528 stop.go:39] StopHost: no-preload-20220516225557-2444
	I0516 22:57:59.460471    8528 out.go:177] * Stopping node "no-preload-20220516225557-2444"  ...
	I0516 22:57:59.477309    8528 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:58:00.549423    8528 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:58:00.549423    8528 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.0719865s)
	W0516 22:58:00.549423    8528 stop.go:75] unable to get state: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	W0516 22:58:00.549423    8528 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:58:00.552774    8528 out.go:177] 
	W0516 22:58:00.555777    8528 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect no-preload-20220516225557-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	
	W0516 22:58:00.555777    8528 out.go:239] * 
	* 
	W0516 22:58:00.789867    8528 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_39.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0516 22:58:00.794626    8528 out.go:177] 

** /stderr **
start_stop_delete_test.go:232: failed stopping minikube - first stop-. args "out/minikube-windows-amd64.exe stop -p no-preload-20220516225557-2444 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220516225557-2444

=== CONT  TestStartStop/group/no-preload/serial/Stop
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220516225557-2444: exit status 1 (1.1424268s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220516225557-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220516225557-2444 -n no-preload-20220516225557-2444

=== CONT  TestStartStop/group/no-preload/serial/Stop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220516225557-2444 -n no-preload-20220516225557-2444: exit status 7 (2.9070455s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:58:04.861726    2744 status.go:247] status error: host: state: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220516225557-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (26.57s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (9.85s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220516225533-2444 -n old-k8s-version-20220516225533-2444
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220516225533-2444 -n old-k8s-version-20220516225533-2444: exit status 7 (2.8882356s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:57:45.090511    5232 status.go:247] status error: host: state: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444

** /stderr **
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:243: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-20220516225533-2444 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-20220516225533-2444 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (2.8942301s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220516225533-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220516225533-2444: exit status 1 (1.1328488s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220516225533-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220516225533-2444 -n old-k8s-version-20220516225533-2444

=== CONT  TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220516225533-2444 -n old-k8s-version-20220516225533-2444: exit status 7 (2.9307776s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:57:52.056047    9212 status.go:247] status error: host: state: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220516225533-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (9.85s)

TestStartStop/group/old-k8s-version/serial/SecondStart (121.93s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-20220516225533-2444 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p old-k8s-version-20220516225533-2444 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0: exit status 60 (1m57.6281539s)

-- stdout --
	* [old-k8s-version-20220516225533-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Kubernetes 1.23.6 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.6
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-20220516225533-2444 in cluster old-k8s-version-20220516225533-2444
	* Pulling base image ...
	* docker "old-k8s-version-20220516225533-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "old-k8s-version-20220516225533-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...

-- /stdout --
** stderr ** 
	I0516 22:57:52.313017    6796 out.go:296] Setting OutFile to fd 1388 ...
	I0516 22:57:52.380335    6796 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:57:52.380418    6796 out.go:309] Setting ErrFile to fd 1592...
	I0516 22:57:52.380544    6796 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:57:52.393396    6796 out.go:303] Setting JSON to false
	I0516 22:57:52.395387    6796 start.go:115] hostinfo: {"hostname":"minikube2","uptime":4984,"bootTime":1652736888,"procs":152,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0516 22:57:52.395387    6796 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0516 22:57:52.403382    6796 out.go:177] * [old-k8s-version-20220516225533-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0516 22:57:52.405675    6796 notify.go:193] Checking for updates...
	I0516 22:57:52.408410    6796 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0516 22:57:52.410395    6796 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0516 22:57:52.412392    6796 out.go:177]   - MINIKUBE_LOCATION=12739
	I0516 22:57:52.415398    6796 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0516 22:57:52.418389    6796 config.go:178] Loaded profile config "old-k8s-version-20220516225533-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0516 22:57:52.422396    6796 out.go:177] * Kubernetes 1.23.6 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.6
	I0516 22:57:52.424401    6796 driver.go:358] Setting default libvirt URI to qemu:///system
	I0516 22:57:55.079638    6796 docker.go:137] docker version: linux-20.10.14
	I0516 22:57:55.088618    6796 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:57:57.116444    6796 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0276646s)
	I0516 22:57:57.117325    6796 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 22:57:56.0799998 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:57:57.122742    6796 out.go:177] * Using the docker driver based on existing profile
	I0516 22:57:57.125322    6796 start.go:284] selected driver: docker
	I0516 22:57:57.125322    6796 start.go:806] validating driver "docker" against &{Name:old-k8s-version-20220516225533-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220516225533-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 22:57:57.125322    6796 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0516 22:57:57.246341    6796 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:57:59.330414    6796 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0837429s)
	I0516 22:57:59.330414    6796 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 22:57:58.2471059 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:57:59.331144    6796 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0516 22:57:59.331144    6796 cni.go:95] Creating CNI manager for ""
	I0516 22:57:59.331144    6796 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0516 22:57:59.331144    6796 start_flags.go:306] config:
	{Name:old-k8s-version-20220516225533-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220516225533-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 22:57:59.338330    6796 out.go:177] * Starting control plane node old-k8s-version-20220516225533-2444 in cluster old-k8s-version-20220516225533-2444
	I0516 22:57:59.340225    6796 cache.go:120] Beginning downloading kic base image for docker with docker
	I0516 22:57:59.343223    6796 out.go:177] * Pulling base image ...
	I0516 22:57:59.345992    6796 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0516 22:57:59.345992    6796 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0516 22:57:59.346259    6796 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0516 22:57:59.346259    6796 cache.go:57] Caching tarball of preloaded images
	I0516 22:57:59.346434    6796 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0516 22:57:59.346744    6796 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0516 22:57:59.346744    6796 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-20220516225533-2444\config.json ...
	I0516 22:58:00.409321    6796 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0516 22:58:00.409385    6796 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:58:00.409385    6796 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:58:00.409385    6796 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
	I0516 22:58:00.409385    6796 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory, skipping pull
	I0516 22:58:00.409385    6796 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in cache, skipping pull
	I0516 22:58:00.410058    6796 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
	I0516 22:58:00.410140    6796 cache.go:160] Loading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from local cache
	I0516 22:58:00.410206    6796 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:58:02.901245    6796 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from cached tarball
	I0516 22:58:02.901310    6796 cache.go:206] Successfully downloaded all kic artifacts
	I0516 22:58:02.901418    6796 start.go:352] acquiring machines lock for old-k8s-version-20220516225533-2444: {Name:mk5023de8a7eabf3a3502247916ec67ae4aced29 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:58:02.901637    6796 start.go:356] acquired machines lock for "old-k8s-version-20220516225533-2444" in 219.2µs
	I0516 22:58:02.901857    6796 start.go:94] Skipping create...Using existing machine configuration
	I0516 22:58:02.901927    6796 fix.go:55] fixHost starting: 
	I0516 22:58:02.918852    6796 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:58:03.961584    6796 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:58:03.961728    6796 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.0425477s)
	I0516 22:58:03.961728    6796 fix.go:103] recreateIfNeeded on old-k8s-version-20220516225533-2444: state= err=unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:58:03.961728    6796 fix.go:108] machineExists: false. err=machine does not exist
	I0516 22:58:03.965953    6796 out.go:177] * docker "old-k8s-version-20220516225533-2444" container is missing, will recreate.
	I0516 22:58:03.968693    6796 delete.go:124] DEMOLISHING old-k8s-version-20220516225533-2444 ...
	I0516 22:58:03.983714    6796 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:58:05.096640    6796 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:58:05.096640    6796 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.1119049s)
	W0516 22:58:05.096640    6796 stop.go:75] unable to get state: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:58:05.096640    6796 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:58:05.113665    6796 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:58:06.193887    6796 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:58:06.193887    6796 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.0802126s)
	I0516 22:58:06.193887    6796 delete.go:82] Unable to get host status for old-k8s-version-20220516225533-2444, assuming it has already been deleted: state: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:58:06.201870    6796 cli_runner.go:164] Run: docker container inspect -f {{.Id}} old-k8s-version-20220516225533-2444
	W0516 22:58:07.255073    6796 cli_runner.go:211] docker container inspect -f {{.Id}} old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:58:07.255296    6796 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} old-k8s-version-20220516225533-2444: (1.0531948s)
	I0516 22:58:07.255296    6796 kic.go:356] could not find the container old-k8s-version-20220516225533-2444 to remove it. will try anyways
	I0516 22:58:07.263254    6796 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:58:08.362792    6796 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:58:08.362792    6796 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.0995281s)
	W0516 22:58:08.362792    6796 oci.go:84] error getting container status, will try to delete anyways: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:58:08.373358    6796 cli_runner.go:164] Run: docker exec --privileged -t old-k8s-version-20220516225533-2444 /bin/bash -c "sudo init 0"
	W0516 22:58:09.433324    6796 cli_runner.go:211] docker exec --privileged -t old-k8s-version-20220516225533-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 22:58:09.433324    6796 cli_runner.go:217] Completed: docker exec --privileged -t old-k8s-version-20220516225533-2444 /bin/bash -c "sudo init 0": (1.0599565s)
	I0516 22:58:09.433324    6796 oci.go:641] error shutdown old-k8s-version-20220516225533-2444: docker exec --privileged -t old-k8s-version-20220516225533-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:58:10.442271    6796 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:58:11.527037    6796 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:58:11.527359    6796 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.0847568s)
	I0516 22:58:11.527476    6796 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:58:11.527540    6796 oci.go:655] temporary error: container old-k8s-version-20220516225533-2444 status is  but expect it to be exited
	I0516 22:58:11.527653    6796 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:58:12.101382    6796 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:58:13.168980    6796 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:58:13.169030    6796 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.0674231s)
	I0516 22:58:13.169030    6796 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:58:13.169030    6796 oci.go:655] temporary error: container old-k8s-version-20220516225533-2444 status is  but expect it to be exited
	I0516 22:58:13.169030    6796 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:58:14.264748    6796 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:58:15.343465    6796 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:58:15.343496    6796 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.0785158s)
	I0516 22:58:15.343607    6796 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:58:15.343650    6796 oci.go:655] temporary error: container old-k8s-version-20220516225533-2444 status is  but expect it to be exited
	I0516 22:58:15.343688    6796 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:58:16.673222    6796 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:58:17.725587    6796 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:58:17.725587    6796 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.0523015s)
	I0516 22:58:17.725587    6796 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:58:17.725587    6796 oci.go:655] temporary error: container old-k8s-version-20220516225533-2444 status is  but expect it to be exited
	I0516 22:58:17.725587    6796 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:58:19.323077    6796 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:58:20.415578    6796 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:58:20.415578    6796 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.0924607s)
	I0516 22:58:20.415911    6796 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:58:20.415945    6796 oci.go:655] temporary error: container old-k8s-version-20220516225533-2444 status is  but expect it to be exited
	I0516 22:58:20.415991    6796 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:58:22.780392    6796 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:58:23.870309    6796 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:58:23.870445    6796 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.0897708s)
	I0516 22:58:23.870445    6796 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:58:23.870445    6796 oci.go:655] temporary error: container old-k8s-version-20220516225533-2444 status is  but expect it to be exited
	I0516 22:58:23.870445    6796 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:58:28.402573    6796 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:58:29.464970    6796 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:58:29.465257    6796 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.062388s)
	I0516 22:58:29.465257    6796 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:58:29.465257    6796 oci.go:655] temporary error: container old-k8s-version-20220516225533-2444 status is  but expect it to be exited
	I0516 22:58:29.465257    6796 oci.go:88] couldn't shut down old-k8s-version-20220516225533-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	 
	I0516 22:58:29.474325    6796 cli_runner.go:164] Run: docker rm -f -v old-k8s-version-20220516225533-2444
	I0516 22:58:30.553822    6796 cli_runner.go:217] Completed: docker rm -f -v old-k8s-version-20220516225533-2444: (1.0794875s)
	I0516 22:58:30.563174    6796 cli_runner.go:164] Run: docker container inspect -f {{.Id}} old-k8s-version-20220516225533-2444
	W0516 22:58:31.656858    6796 cli_runner.go:211] docker container inspect -f {{.Id}} old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:58:31.656858    6796 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} old-k8s-version-20220516225533-2444: (1.093674s)
	I0516 22:58:31.656858    6796 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220516225533-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:58:32.768205    6796 cli_runner.go:211] docker network inspect old-k8s-version-20220516225533-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:58:32.768205    6796 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220516225533-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1113379s)
	I0516 22:58:32.777236    6796 network_create.go:272] running [docker network inspect old-k8s-version-20220516225533-2444] to gather additional debugging logs...
	I0516 22:58:32.777236    6796 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220516225533-2444
	W0516 22:58:33.871056    6796 cli_runner.go:211] docker network inspect old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:58:33.871166    6796 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220516225533-2444: (1.0934867s)
	I0516 22:58:33.871166    6796 network_create.go:275] error running [docker network inspect old-k8s-version-20220516225533-2444]: docker network inspect old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220516225533-2444
	I0516 22:58:33.871166    6796 network_create.go:277] output of [docker network inspect old-k8s-version-20220516225533-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220516225533-2444
	
	** /stderr **
	W0516 22:58:33.872017    6796 delete.go:139] delete failed (probably ok) <nil>
	I0516 22:58:33.872017    6796 fix.go:115] Sleeping 1 second for extra luck!
	I0516 22:58:34.876200    6796 start.go:131] createHost starting for "" (driver="docker")
	I0516 22:58:34.883384    6796 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0516 22:58:34.883384    6796 start.go:165] libmachine.API.Create for "old-k8s-version-20220516225533-2444" (driver="docker")
	I0516 22:58:34.883384    6796 client.go:168] LocalClient.Create starting
	I0516 22:58:34.884530    6796 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 22:58:34.884530    6796 main.go:134] libmachine: Decoding PEM data...
	I0516 22:58:34.885095    6796 main.go:134] libmachine: Parsing certificate...
	I0516 22:58:34.885287    6796 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 22:58:34.885287    6796 main.go:134] libmachine: Decoding PEM data...
	I0516 22:58:34.885287    6796 main.go:134] libmachine: Parsing certificate...
	I0516 22:58:34.894190    6796 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220516225533-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:58:35.988880    6796 cli_runner.go:211] docker network inspect old-k8s-version-20220516225533-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:58:35.988880    6796 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220516225533-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0946804s)
	I0516 22:58:35.996879    6796 network_create.go:272] running [docker network inspect old-k8s-version-20220516225533-2444] to gather additional debugging logs...
	I0516 22:58:35.996879    6796 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220516225533-2444
	W0516 22:58:37.069902    6796 cli_runner.go:211] docker network inspect old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:58:37.070063    6796 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220516225533-2444: (1.0730134s)
	I0516 22:58:37.070063    6796 network_create.go:275] error running [docker network inspect old-k8s-version-20220516225533-2444]: docker network inspect old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220516225533-2444
	I0516 22:58:37.070123    6796 network_create.go:277] output of [docker network inspect old-k8s-version-20220516225533-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220516225533-2444
	
	** /stderr **
	I0516 22:58:37.078614    6796 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 22:58:38.147650    6796 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0687796s)
	I0516 22:58:38.166380    6796 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0000063d0] misses:0}
	I0516 22:58:38.166380    6796 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:58:38.166380    6796 network_create.go:115] attempt to create docker network old-k8s-version-20220516225533-2444 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0516 22:58:38.182215    6796 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444
	W0516 22:58:39.267606    6796 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:58:39.267606    6796 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444: (1.0849451s)
	W0516 22:58:39.267606    6796 network_create.go:107] failed to create docker network old-k8s-version-20220516225533-2444 192.168.49.0/24, will retry: subnet is taken
	I0516 22:58:39.284282    6796 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000063d0] amended:false}} dirty:map[] misses:0}
	I0516 22:58:39.284282    6796 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:58:39.299258    6796 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000063d0] amended:true}} dirty:map[192.168.49.0:0xc0000063d0 192.168.58.0:0xc000610280] misses:0}
	I0516 22:58:39.299258    6796 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:58:39.299258    6796 network_create.go:115] attempt to create docker network old-k8s-version-20220516225533-2444 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0516 22:58:39.306285    6796 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444
	W0516 22:58:40.393753    6796 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:58:40.393753    6796 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444: (1.0871209s)
	W0516 22:58:40.393753    6796 network_create.go:107] failed to create docker network old-k8s-version-20220516225533-2444 192.168.58.0/24, will retry: subnet is taken
	I0516 22:58:40.411095    6796 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000063d0] amended:true}} dirty:map[192.168.49.0:0xc0000063d0 192.168.58.0:0xc000610280] misses:1}
	I0516 22:58:40.411813    6796 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:58:40.428617    6796 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000063d0] amended:true}} dirty:map[192.168.49.0:0xc0000063d0 192.168.58.0:0xc000610280 192.168.67.0:0xc0006103b8] misses:1}
	I0516 22:58:40.429199    6796 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:58:40.429199    6796 network_create.go:115] attempt to create docker network old-k8s-version-20220516225533-2444 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0516 22:58:40.439185    6796 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444
	W0516 22:58:41.521094    6796 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:58:41.521134    6796 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444: (1.0817403s)
	W0516 22:58:41.521161    6796 network_create.go:107] failed to create docker network old-k8s-version-20220516225533-2444 192.168.67.0/24, will retry: subnet is taken
	I0516 22:58:41.537860    6796 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000063d0] amended:true}} dirty:map[192.168.49.0:0xc0000063d0 192.168.58.0:0xc000610280 192.168.67.0:0xc0006103b8] misses:2}
	I0516 22:58:41.537860    6796 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:58:41.552706    6796 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000063d0] amended:true}} dirty:map[192.168.49.0:0xc0000063d0 192.168.58.0:0xc000610280 192.168.67.0:0xc0006103b8 192.168.76.0:0xc000610508] misses:2}
	I0516 22:58:41.552755    6796 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:58:41.552755    6796 network_create.go:115] attempt to create docker network old-k8s-version-20220516225533-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0516 22:58:41.564927    6796 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444
	W0516 22:58:42.640681    6796 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:58:42.640681    6796 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444: (1.0757454s)
	E0516 22:58:42.640681    6796 network_create.go:104] error while trying to create docker network old-k8s-version-20220516225533-2444 192.168.76.0/24: create docker network old-k8s-version-20220516225533-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 59c4a3130b2cad8de09f3e3856641fe0287f2f65d90053b9d203b7aeeb5b9df4 (br-59c4a3130b2c): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	W0516 22:58:42.640681    6796 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network old-k8s-version-20220516225533-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 59c4a3130b2cad8de09f3e3856641fe0287f2f65d90053b9d203b7aeeb5b9df4 (br-59c4a3130b2c): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network old-k8s-version-20220516225533-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 59c4a3130b2cad8de09f3e3856641fe0287f2f65d90053b9d203b7aeeb5b9df4 (br-59c4a3130b2c): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
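The failure above is minikube cycling through candidate subnets (192.168.49.0/24, .58.0/24, .67.0/24, .76.0/24) until Docker rejects the last one with "networks have overlapping IPv4": some pre-existing bridge already covers part of that range. As an illustration only (not minikube's code; the conflicting subnet is not named in the log, so the 192.168.64.0/18 range below is a hypothetical example), CIDR overlap can be checked with Python's stdlib `ipaddress`:

```python
import ipaddress

def overlaps(cidr_a: str, cidr_b: str) -> bool:
    # True if the two networks share any addresses
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

# A hypothetical wide bridge network swallows the /24 minikube asked for:
print(overlaps("192.168.76.0/24", "192.168.64.0/18"))  # True
# Disjoint /24s do not conflict:
print(overlaps("192.168.49.0/24", "192.168.50.0/24"))  # False
```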
	
	I0516 22:58:42.656684    6796 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 22:58:43.759489    6796 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1027956s)
	I0516 22:58:43.771930    6796 cli_runner.go:164] Run: docker volume create old-k8s-version-20220516225533-2444 --label name.minikube.sigs.k8s.io=old-k8s-version-20220516225533-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 22:58:44.818696    6796 cli_runner.go:211] docker volume create old-k8s-version-20220516225533-2444 --label name.minikube.sigs.k8s.io=old-k8s-version-20220516225533-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 22:58:44.818729    6796 cli_runner.go:217] Completed: docker volume create old-k8s-version-20220516225533-2444 --label name.minikube.sigs.k8s.io=old-k8s-version-20220516225533-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0465866s)
	I0516 22:58:44.818961    6796 client.go:171] LocalClient.Create took 9.9354933s
	I0516 22:58:46.840641    6796 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:58:46.848732    6796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444
	W0516 22:58:47.869701    6796 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:58:47.869767    6796 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: (1.0207417s)
	I0516 22:58:47.869952    6796 retry.go:31] will retry after 164.129813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:58:48.050365    6796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444
	W0516 22:58:49.105145    6796 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:58:49.105145    6796 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: (1.0547717s)
	W0516 22:58:49.105605    6796 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	
	W0516 22:58:49.105699    6796 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:58:49.117672    6796 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:58:49.124219    6796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444
	W0516 22:58:50.205826    6796 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:58:50.205826    6796 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: (1.0815982s)
	I0516 22:58:50.205826    6796 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:58:50.420403    6796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444
	W0516 22:58:51.494469    6796 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:58:51.494589    6796 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: (1.0738663s)
	W0516 22:58:51.494661    6796 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	
	W0516 22:58:51.494661    6796 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:58:51.494661    6796 start.go:134] duration metric: createHost completed in 16.6181772s
	I0516 22:58:51.505362    6796 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:58:51.512275    6796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444
	W0516 22:58:52.591568    6796 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:58:52.591568    6796 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: (1.0792839s)
	I0516 22:58:52.591568    6796 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:58:52.933252    6796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444
	W0516 22:58:54.022293    6796 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:58:54.022293    6796 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: (1.0890312s)
	W0516 22:58:54.022293    6796 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	
	W0516 22:58:54.022293    6796 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:58:54.034291    6796 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:58:54.042292    6796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444
	W0516 22:58:55.133232    6796 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:58:55.133232    6796 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: (1.0908042s)
	I0516 22:58:55.133232    6796 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:58:55.377896    6796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444
	W0516 22:58:56.478727    6796 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:58:56.478870    6796 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: (1.1006059s)
	W0516 22:58:56.478898    6796 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	
	W0516 22:58:56.478898    6796 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:58:56.478898    6796 fix.go:57] fixHost completed within 53.5765287s
	I0516 22:58:56.478898    6796 start.go:81] releasing machines lock for "old-k8s-version-20220516225533-2444", held for 53.5767295s
	W0516 22:58:56.478898    6796 start.go:608] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220516225533-2444 container: docker volume create old-k8s-version-20220516225533-2444 --label name.minikube.sigs.k8s.io=old-k8s-version-20220516225533-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220516225533-2444: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220516225533-2444': mkdir /var/lib/docker/volumes/old-k8s-version-20220516225533-2444: read-only file system
	W0516 22:58:56.479503    6796 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220516225533-2444 container: docker volume create old-k8s-version-20220516225533-2444 --label name.minikube.sigs.k8s.io=old-k8s-version-20220516225533-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220516225533-2444: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220516225533-2444': mkdir /var/lib/docker/volumes/old-k8s-version-20220516225533-2444: read-only file system
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220516225533-2444 container: docker volume create old-k8s-version-20220516225533-2444 --label name.minikube.sigs.k8s.io=old-k8s-version-20220516225533-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220516225533-2444: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220516225533-2444': mkdir /var/lib/docker/volumes/old-k8s-version-20220516225533-2444: read-only file system
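The underlying cause of the `StartHost` failure above is the daemon-side error: `mkdir /var/lib/docker/volumes/...: read-only file system`, i.e. the Docker storage root was mounted read-only when the volume was created. A minimal sketch (an illustration, not minikube's diagnostic; the sample line is fabricated) of detecting that condition by parsing a `/proc/mounts`-style entry, whose fourth field holds the mount options:

```python
def mount_is_readonly(mounts_line: str) -> bool:
    # /proc/mounts fields: device mountpoint fstype options dump pass
    options = mounts_line.split()[3].split(",")
    return "ro" in options

# Hypothetical entry matching the failure mode in the log above:
print(mount_is_readonly("overlay /var/lib/docker overlay ro,relatime 0 0"))  # True
print(mount_is_readonly("overlay /var/lib/docker overlay rw,relatime 0 0"))  # False
```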
	
	I0516 22:58:56.479503    6796 start.go:623] Will try again in 5 seconds ...
	I0516 22:59:01.486563    6796 start.go:352] acquiring machines lock for old-k8s-version-20220516225533-2444: {Name:mk5023de8a7eabf3a3502247916ec67ae4aced29 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:59:01.486563    6796 start.go:356] acquired machines lock for "old-k8s-version-20220516225533-2444" in 0s
	I0516 22:59:01.486563    6796 start.go:94] Skipping create...Using existing machine configuration
	I0516 22:59:01.486563    6796 fix.go:55] fixHost starting: 
	I0516 22:59:01.505508    6796 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:59:02.591314    6796 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:59:02.591396    6796 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.0856861s)
	I0516 22:59:02.591522    6796 fix.go:103] recreateIfNeeded on old-k8s-version-20220516225533-2444: state= err=unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:59:02.591589    6796 fix.go:108] machineExists: false. err=machine does not exist
	I0516 22:59:02.613968    6796 out.go:177] * docker "old-k8s-version-20220516225533-2444" container is missing, will recreate.
	I0516 22:59:02.616254    6796 delete.go:124] DEMOLISHING old-k8s-version-20220516225533-2444 ...
	I0516 22:59:02.633628    6796 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:59:03.721853    6796 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:59:03.721853    6796 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.0882166s)
	W0516 22:59:03.721853    6796 stop.go:75] unable to get state: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:59:03.721853    6796 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:59:03.741102    6796 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:59:04.891828    6796 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:59:04.892086    6796 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.1507157s)
	I0516 22:59:04.892086    6796 delete.go:82] Unable to get host status for old-k8s-version-20220516225533-2444, assuming it has already been deleted: state: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:59:04.901044    6796 cli_runner.go:164] Run: docker container inspect -f {{.Id}} old-k8s-version-20220516225533-2444
	W0516 22:59:06.029136    6796 cli_runner.go:211] docker container inspect -f {{.Id}} old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:59:06.029212    6796 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} old-k8s-version-20220516225533-2444: (1.128022s)
	I0516 22:59:06.029250    6796 kic.go:356] could not find the container old-k8s-version-20220516225533-2444 to remove it. will try anyways
	I0516 22:59:06.036223    6796 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:59:07.127432    6796 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:59:07.127506    6796 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.0910922s)
	W0516 22:59:07.127639    6796 oci.go:84] error getting container status, will try to delete anyways: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:59:07.137396    6796 cli_runner.go:164] Run: docker exec --privileged -t old-k8s-version-20220516225533-2444 /bin/bash -c "sudo init 0"
	W0516 22:59:08.188735    6796 cli_runner.go:211] docker exec --privileged -t old-k8s-version-20220516225533-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 22:59:08.188735    6796 cli_runner.go:217] Completed: docker exec --privileged -t old-k8s-version-20220516225533-2444 /bin/bash -c "sudo init 0": (1.0513295s)
	I0516 22:59:08.188735    6796 oci.go:641] error shutdown old-k8s-version-20220516225533-2444: docker exec --privileged -t old-k8s-version-20220516225533-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:59:09.207344    6796 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:59:10.294215    6796 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:59:10.294215    6796 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.0866317s)
	I0516 22:59:10.294215    6796 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:59:10.294215    6796 oci.go:655] temporary error: container old-k8s-version-20220516225533-2444 status is  but expect it to be exited
	I0516 22:59:10.294215    6796 retry.go:31] will retry after 484.444922ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:59:10.789716    6796 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:59:11.858182    6796 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:59:11.858182    6796 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.0683824s)
	I0516 22:59:11.858182    6796 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:59:11.858182    6796 oci.go:655] temporary error: container old-k8s-version-20220516225533-2444 status is  but expect it to be exited
	I0516 22:59:11.858182    6796 retry.go:31] will retry after 587.275613ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:59:12.466213    6796 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:59:13.550485    6796 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:59:13.550485    6796 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.0841249s)
	I0516 22:59:13.550485    6796 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:59:13.550485    6796 oci.go:655] temporary error: container old-k8s-version-20220516225533-2444 status is  but expect it to be exited
	I0516 22:59:13.550485    6796 retry.go:31] will retry after 892.239589ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:59:14.460906    6796 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:59:15.523594    6796 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:59:15.523594    6796 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.0626794s)
	I0516 22:59:15.523594    6796 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:59:15.523594    6796 oci.go:655] temporary error: container old-k8s-version-20220516225533-2444 status is  but expect it to be exited
	I0516 22:59:15.523594    6796 retry.go:31] will retry after 1.989705391s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:59:17.525431    6796 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:59:18.612770    6796 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:59:18.612770    6796 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.0873296s)
	I0516 22:59:18.612770    6796 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:59:18.612770    6796 oci.go:655] temporary error: container old-k8s-version-20220516225533-2444 status is  but expect it to be exited
	I0516 22:59:18.612770    6796 retry.go:31] will retry after 1.818837414s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:59:20.450905    6796 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:59:21.494185    6796 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:59:21.494185    6796 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.0432704s)
	I0516 22:59:21.494185    6796 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:59:21.494185    6796 oci.go:655] temporary error: container old-k8s-version-20220516225533-2444 status is  but expect it to be exited
	I0516 22:59:21.494185    6796 retry.go:31] will retry after 2.669912672s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:59:24.188914    6796 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 22:59:25.294663    6796 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:59:25.294663    6796 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (1.1056913s)
	I0516 22:59:25.294663    6796 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:59:25.294663    6796 oci.go:655] temporary error: container old-k8s-version-20220516225533-2444 status is  but expect it to be exited
	I0516 22:59:25.294663    6796 oci.go:88] couldn't shut down old-k8s-version-20220516225533-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	 
	I0516 22:59:25.303657    6796 cli_runner.go:164] Run: docker rm -f -v old-k8s-version-20220516225533-2444
	I0516 22:59:26.425540    6796 cli_runner.go:217] Completed: docker rm -f -v old-k8s-version-20220516225533-2444: (1.1218731s)
	I0516 22:59:26.434364    6796 cli_runner.go:164] Run: docker container inspect -f {{.Id}} old-k8s-version-20220516225533-2444
	W0516 22:59:27.558589    6796 cli_runner.go:211] docker container inspect -f {{.Id}} old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:59:27.558694    6796 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} old-k8s-version-20220516225533-2444: (1.1240788s)
	I0516 22:59:27.566776    6796 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220516225533-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:59:28.642432    6796 cli_runner.go:211] docker network inspect old-k8s-version-20220516225533-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:59:28.642489    6796 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220516225533-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0755578s)
	I0516 22:59:28.651035    6796 network_create.go:272] running [docker network inspect old-k8s-version-20220516225533-2444] to gather additional debugging logs...
	I0516 22:59:28.651035    6796 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220516225533-2444
	W0516 22:59:29.806664    6796 cli_runner.go:211] docker network inspect old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:59:29.806740    6796 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220516225533-2444: (1.1555819s)
	I0516 22:59:29.806784    6796 network_create.go:275] error running [docker network inspect old-k8s-version-20220516225533-2444]: docker network inspect old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220516225533-2444
	I0516 22:59:29.806784    6796 network_create.go:277] output of [docker network inspect old-k8s-version-20220516225533-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220516225533-2444
	
	** /stderr **
	W0516 22:59:29.807933    6796 delete.go:139] delete failed (probably ok) <nil>
	I0516 22:59:29.807983    6796 fix.go:115] Sleeping 1 second for extra luck!
	I0516 22:59:30.823669    6796 start.go:131] createHost starting for "" (driver="docker")
	I0516 22:59:30.829572    6796 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0516 22:59:30.830053    6796 start.go:165] libmachine.API.Create for "old-k8s-version-20220516225533-2444" (driver="docker")
	I0516 22:59:30.830086    6796 client.go:168] LocalClient.Create starting
	I0516 22:59:30.830298    6796 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 22:59:30.830841    6796 main.go:134] libmachine: Decoding PEM data...
	I0516 22:59:30.830923    6796 main.go:134] libmachine: Parsing certificate...
	I0516 22:59:30.831046    6796 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 22:59:30.831046    6796 main.go:134] libmachine: Decoding PEM data...
	I0516 22:59:30.831046    6796 main.go:134] libmachine: Parsing certificate...
	I0516 22:59:30.839496    6796 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220516225533-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:59:31.944887    6796 cli_runner.go:211] docker network inspect old-k8s-version-20220516225533-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:59:31.945012    6796 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220516225533-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.105217s)
	I0516 22:59:31.955592    6796 network_create.go:272] running [docker network inspect old-k8s-version-20220516225533-2444] to gather additional debugging logs...
	I0516 22:59:31.955592    6796 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220516225533-2444
	W0516 22:59:33.060753    6796 cli_runner.go:211] docker network inspect old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:59:33.060753    6796 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220516225533-2444: (1.1051514s)
	I0516 22:59:33.060753    6796 network_create.go:275] error running [docker network inspect old-k8s-version-20220516225533-2444]: docker network inspect old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220516225533-2444
	I0516 22:59:33.060753    6796 network_create.go:277] output of [docker network inspect old-k8s-version-20220516225533-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220516225533-2444
	
	** /stderr **
	I0516 22:59:33.070308    6796 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 22:59:34.187505    6796 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1170638s)
	I0516 22:59:34.208709    6796 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000063d0] amended:true}} dirty:map[192.168.49.0:0xc0000063d0 192.168.58.0:0xc000610280 192.168.67.0:0xc0006103b8 192.168.76.0:0xc000610508] misses:2}
	I0516 22:59:34.209561    6796 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:59:34.229348    6796 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000063d0] amended:true}} dirty:map[192.168.49.0:0xc0000063d0 192.168.58.0:0xc000610280 192.168.67.0:0xc0006103b8 192.168.76.0:0xc000610508] misses:3}
	I0516 22:59:34.229404    6796 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:59:34.245797    6796 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000063d0 192.168.58.0:0xc000610280 192.168.67.0:0xc0006103b8 192.168.76.0:0xc000610508] amended:false}} dirty:map[] misses:0}
	I0516 22:59:34.245829    6796 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:59:34.266466    6796 network.go:279] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000063d0 192.168.58.0:0xc000610280 192.168.67.0:0xc0006103b8 192.168.76.0:0xc000610508] amended:false}} dirty:map[] misses:0}
	I0516 22:59:34.267292    6796 network.go:238] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:59:34.294033    6796 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000063d0 192.168.58.0:0xc000610280 192.168.67.0:0xc0006103b8 192.168.76.0:0xc000610508] amended:true}} dirty:map[192.168.49.0:0xc0000063d0 192.168.58.0:0xc000610280 192.168.67.0:0xc0006103b8 192.168.76.0:0xc000610508 192.168.85.0:0xc000610418] misses:0}
	I0516 22:59:34.294033    6796 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:59:34.294033    6796 network_create.go:115] attempt to create docker network old-k8s-version-20220516225533-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0516 22:59:34.306840    6796 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444
	W0516 22:59:35.403755    6796 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:59:35.403881    6796 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444: (1.096763s)
	E0516 22:59:35.403881    6796 network_create.go:104] error while trying to create docker network old-k8s-version-20220516225533-2444 192.168.85.0/24: create docker network old-k8s-version-20220516225533-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4d64c4c78645c74dc8ef243cdf03fcf41277ab8eca523379ac39eebe471ec4b2 (br-4d64c4c78645): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	W0516 22:59:35.403881    6796 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network old-k8s-version-20220516225533-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4d64c4c78645c74dc8ef243cdf03fcf41277ab8eca523379ac39eebe471ec4b2 (br-4d64c4c78645): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network old-k8s-version-20220516225533-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4d64c4c78645c74dc8ef243cdf03fcf41277ab8eca523379ac39eebe471ec4b2 (br-4d64c4c78645): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	I0516 22:59:35.421932    6796 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 22:59:36.512776    6796 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0907589s)
	I0516 22:59:36.521823    6796 cli_runner.go:164] Run: docker volume create old-k8s-version-20220516225533-2444 --label name.minikube.sigs.k8s.io=old-k8s-version-20220516225533-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 22:59:37.583453    6796 cli_runner.go:211] docker volume create old-k8s-version-20220516225533-2444 --label name.minikube.sigs.k8s.io=old-k8s-version-20220516225533-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 22:59:37.583593    6796 cli_runner.go:217] Completed: docker volume create old-k8s-version-20220516225533-2444 --label name.minikube.sigs.k8s.io=old-k8s-version-20220516225533-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0614536s)
	I0516 22:59:37.583593    6796 client.go:171] LocalClient.Create took 6.7534498s
	I0516 22:59:39.597497    6796 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:59:39.605454    6796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444
	W0516 22:59:40.730726    6796 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:59:40.730759    6796 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: (1.1250698s)
	I0516 22:59:40.730877    6796 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:59:41.013237    6796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444
	W0516 22:59:42.095088    6796 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:59:42.095270    6796 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: (1.0818219s)
	W0516 22:59:42.095270    6796 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	
	W0516 22:59:42.095270    6796 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:59:42.108932    6796 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:59:42.117725    6796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444
	W0516 22:59:43.180868    6796 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:59:43.180868    6796 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: (1.0631333s)
	I0516 22:59:43.180868    6796 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:59:43.397347    6796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444
	W0516 22:59:44.495008    6796 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:59:44.495062    6796 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: (1.0974992s)
	W0516 22:59:44.495062    6796 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	
	W0516 22:59:44.495062    6796 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:59:44.495062    6796 start.go:134] duration metric: createHost completed in 13.6712763s
	I0516 22:59:44.505650    6796 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:59:44.513456    6796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444
	W0516 22:59:45.625040    6796 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:59:45.625040    6796 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: (1.1115751s)
	I0516 22:59:45.625040    6796 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:59:45.953141    6796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444
	W0516 22:59:47.081293    6796 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:59:47.081293    6796 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: (1.1281426s)
	W0516 22:59:47.081293    6796 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	
	W0516 22:59:47.081293    6796 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:59:47.094955    6796 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:59:47.102097    6796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444
	W0516 22:59:48.214827    6796 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:59:48.214827    6796 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: (1.1127208s)
	I0516 22:59:48.214827    6796 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:59:48.576088    6796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444
	W0516 22:59:49.670539    6796 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444 returned with exit code 1
	I0516 22:59:49.670539    6796 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: (1.0944418s)
	W0516 22:59:49.670539    6796 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	
	W0516 22:59:49.670539    6796 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220516225533-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220516225533-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	I0516 22:59:49.670539    6796 fix.go:57] fixHost completed within 48.183566s
	I0516 22:59:49.670539    6796 start.go:81] releasing machines lock for "old-k8s-version-20220516225533-2444", held for 48.183566s
	W0516 22:59:49.671252    6796 out.go:239] * Failed to start docker container. Running "minikube delete -p old-k8s-version-20220516225533-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220516225533-2444 container: docker volume create old-k8s-version-20220516225533-2444 --label name.minikube.sigs.k8s.io=old-k8s-version-20220516225533-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220516225533-2444: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220516225533-2444': mkdir /var/lib/docker/volumes/old-k8s-version-20220516225533-2444: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p old-k8s-version-20220516225533-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220516225533-2444 container: docker volume create old-k8s-version-20220516225533-2444 --label name.minikube.sigs.k8s.io=old-k8s-version-20220516225533-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220516225533-2444: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220516225533-2444': mkdir /var/lib/docker/volumes/old-k8s-version-20220516225533-2444: read-only file system
	
	I0516 22:59:49.678390    6796 out.go:177] 
	W0516 22:59:49.680767    6796 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220516225533-2444 container: docker volume create old-k8s-version-20220516225533-2444 --label name.minikube.sigs.k8s.io=old-k8s-version-20220516225533-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220516225533-2444: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220516225533-2444': mkdir /var/lib/docker/volumes/old-k8s-version-20220516225533-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220516225533-2444 container: docker volume create old-k8s-version-20220516225533-2444 --label name.minikube.sigs.k8s.io=old-k8s-version-20220516225533-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220516225533-2444: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220516225533-2444': mkdir /var/lib/docker/volumes/old-k8s-version-20220516225533-2444: read-only file system
	
	W0516 22:59:49.681733    6796 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0516 22:59:49.681733    6796 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0516 22:59:49.684488    6796 out.go:177] 

** /stderr **
start_stop_delete_test.go:261: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p old-k8s-version-20220516225533-2444 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0": exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220516225533-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220516225533-2444: exit status 1 (1.1691667s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220516225533-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220516225533-2444 -n old-k8s-version-20220516225533-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220516225533-2444 -n old-k8s-version-20220516225533-2444: exit status 7 (2.9211776s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:59:53.984320     740 status.go:247] status error: host: state: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220516225533-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (121.93s)

TestStartStop/group/embed-certs/serial/DeployApp (8.24s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context embed-certs-20220516225628-2444 create -f testdata\busybox.yaml
start_stop_delete_test.go:198: (dbg) Non-zero exit: kubectl --context embed-certs-20220516225628-2444 create -f testdata\busybox.yaml: exit status 1 (257.5291ms)

** stderr ** 
	error: context "embed-certs-20220516225628-2444" does not exist

** /stderr **
start_stop_delete_test.go:198: kubectl --context embed-certs-20220516225628-2444 create -f testdata\busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220516225628-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220516225628-2444: exit status 1 (1.1080659s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220516225628-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220516225628-2444 -n embed-certs-20220516225628-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220516225628-2444 -n embed-certs-20220516225628-2444: exit status 7 (2.8578616s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:57:57.747036    8268 status.go:247] status error: host: state: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220516225628-2444" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220516225628-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220516225628-2444: exit status 1 (1.0685073s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220516225628-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220516225628-2444 -n embed-certs-20220516225628-2444

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220516225628-2444 -n embed-certs-20220516225628-2444: exit status 7 (2.9259744s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:58:01.751004    6520 status.go:247] status error: host: state: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220516225628-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (8.24s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (7.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-20220516225628-2444 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

=== CONT  TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-20220516225628-2444 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.9207622s)
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context embed-certs-20220516225628-2444 describe deploy/metrics-server -n kube-system

=== CONT  TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:217: (dbg) Non-zero exit: kubectl --context embed-certs-20220516225628-2444 describe deploy/metrics-server -n kube-system: exit status 1 (246.7043ms)

** stderr ** 
	error: context "embed-certs-20220516225628-2444" does not exist

** /stderr **
start_stop_delete_test.go:219: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-20220516225628-2444 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:223: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220516225628-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220516225628-2444: exit status 1 (1.1314614s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220516225628-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220516225628-2444 -n embed-certs-20220516225628-2444

=== CONT  TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220516225628-2444 -n embed-certs-20220516225628-2444: exit status 7 (2.9164603s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:58:08.997964    3912 status.go:247] status error: host: state: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220516225628-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (7.23s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (9.95s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220516225557-2444 -n no-preload-20220516225557-2444

=== CONT  TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220516225557-2444 -n no-preload-20220516225557-2444: exit status 7 (2.933412s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:58:07.794467    8592 status.go:247] status error: host: state: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444

** /stderr **
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:243: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-20220516225557-2444 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4

=== CONT  TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-20220516225557-2444 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (2.8916162s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220516225557-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220516225557-2444: exit status 1 (1.1606214s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220516225557-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220516225557-2444 -n no-preload-20220516225557-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220516225557-2444 -n no-preload-20220516225557-2444: exit status 7 (2.9465406s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:58:14.808468    3076 status.go:247] status error: host: state: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220516225557-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (9.95s)

TestStartStop/group/embed-certs/serial/Stop (26.67s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-20220516225628-2444 --alsologtostderr -v=3

=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:230: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p embed-certs-20220516225628-2444 --alsologtostderr -v=3: exit status 82 (22.5418126s)

-- stdout --
	* Stopping node "embed-certs-20220516225628-2444"  ...
	* Stopping node "embed-certs-20220516225628-2444"  ...
	* Stopping node "embed-certs-20220516225628-2444"  ...
	* Stopping node "embed-certs-20220516225628-2444"  ...
	* Stopping node "embed-certs-20220516225628-2444"  ...
	* Stopping node "embed-certs-20220516225628-2444"  ...
	
	
-- /stdout --
** stderr ** 
	I0516 22:58:09.244965    9084 out.go:296] Setting OutFile to fd 1984 ...
	I0516 22:58:09.307542    9084 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:58:09.307542    9084 out.go:309] Setting ErrFile to fd 1476...
	I0516 22:58:09.307542    9084 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:58:09.319037    9084 out.go:303] Setting JSON to false
	I0516 22:58:09.319894    9084 daemonize_windows.go:44] trying to kill existing schedule stop for profile embed-certs-20220516225628-2444...
	I0516 22:58:09.332751    9084 ssh_runner.go:195] Run: systemctl --version
	I0516 22:58:09.341864    9084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444
	W0516 22:58:11.874572    9084 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:58:11.874664    9084 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: (2.5317442s)
	I0516 22:58:11.896128    9084 ssh_runner.go:195] Run: sudo service minikube-scheduled-stop stop
	I0516 22:58:11.906525    9084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444
	W0516 22:58:12.981415    9084 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:58:12.981600    9084 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: (1.0746919s)
	I0516 22:58:12.981764    9084 retry.go:31] will retry after 360.127272ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:58:13.361738    9084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444
	W0516 22:58:14.413197    9084 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:58:14.413197    9084 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: (1.0514503s)
	I0516 22:58:14.413197    9084 openrc.go:165] stop output: 
	E0516 22:58:14.413197    9084 daemonize_windows.go:38] error terminating scheduled stop for profile embed-certs-20220516225628-2444: stopping schedule-stop service for profile embed-certs-20220516225628-2444: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:58:14.413197    9084 mustload.go:65] Loading cluster: embed-certs-20220516225628-2444
	I0516 22:58:14.413765    9084 config.go:178] Loaded profile config "embed-certs-20220516225628-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:58:14.413765    9084 stop.go:39] StopHost: embed-certs-20220516225628-2444
	I0516 22:58:14.417756    9084 out.go:177] * Stopping node "embed-certs-20220516225628-2444"  ...
	I0516 22:58:14.435757    9084 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 22:58:15.498860    9084 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:58:15.498860    9084 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.0630937s)
	W0516 22:58:15.498860    9084 stop.go:75] unable to get state: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	W0516 22:58:15.498860    9084 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:58:15.498860    9084 retry.go:31] will retry after 937.714187ms: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:58:16.445482    9084 stop.go:39] StopHost: embed-certs-20220516225628-2444
	I0516 22:58:16.451791    9084 out.go:177] * Stopping node "embed-certs-20220516225628-2444"  ...
	I0516 22:58:16.470058    9084 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 22:58:17.521788    9084 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:58:17.521788    9084 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.0516861s)
	W0516 22:58:17.521788    9084 stop.go:75] unable to get state: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	W0516 22:58:17.521788    9084 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:58:17.521788    9084 retry.go:31] will retry after 1.386956246s: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:58:18.918873    9084 stop.go:39] StopHost: embed-certs-20220516225628-2444
	I0516 22:58:18.927872    9084 out.go:177] * Stopping node "embed-certs-20220516225628-2444"  ...
	I0516 22:58:18.946532    9084 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 22:58:20.020597    9084 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:58:20.020597    9084 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.074056s)
	W0516 22:58:20.020597    9084 stop.go:75] unable to get state: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	W0516 22:58:20.020597    9084 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:58:20.020597    9084 retry.go:31] will retry after 2.670351914s: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:58:22.692663    9084 stop.go:39] StopHost: embed-certs-20220516225628-2444
	I0516 22:58:22.697250    9084 out.go:177] * Stopping node "embed-certs-20220516225628-2444"  ...
	I0516 22:58:22.711558    9084 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 22:58:23.822897    9084 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:58:23.823072    9084 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.1112061s)
	W0516 22:58:23.823190    9084 stop.go:75] unable to get state: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	W0516 22:58:23.823260    9084 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:58:23.823323    9084 retry.go:31] will retry after 1.909024939s: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:58:25.739744    9084 stop.go:39] StopHost: embed-certs-20220516225628-2444
	I0516 22:58:25.744987    9084 out.go:177] * Stopping node "embed-certs-20220516225628-2444"  ...
	I0516 22:58:25.760801    9084 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 22:58:26.803306    9084 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:58:26.803306    9084 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.0424968s)
	W0516 22:58:26.803306    9084 stop.go:75] unable to get state: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	W0516 22:58:26.803602    9084 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:58:26.803602    9084 retry.go:31] will retry after 3.323628727s: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:58:30.131634    9084 stop.go:39] StopHost: embed-certs-20220516225628-2444
	I0516 22:58:30.136856    9084 out.go:177] * Stopping node "embed-certs-20220516225628-2444"  ...
	I0516 22:58:30.157346    9084 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 22:58:31.257921    9084 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:58:31.257921    9084 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.1005652s)
	W0516 22:58:31.257921    9084 stop.go:75] unable to get state: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	W0516 22:58:31.257921    9084 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:58:31.261935    9084 out.go:177] 
	W0516 22:58:31.265204    9084 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect embed-certs-20220516225628-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	
	W0516 22:58:31.265204    9084 out.go:239] * 
	W0516 22:58:31.510855    9084 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_39.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0516 22:58:31.514876    9084 out.go:177] 

** /stderr **
start_stop_delete_test.go:232: failed stopping minikube - first stop-. args "out/minikube-windows-amd64.exe stop -p embed-certs-20220516225628-2444 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220516225628-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220516225628-2444: exit status 1 (1.1834624s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220516225628-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220516225628-2444 -n embed-certs-20220516225628-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220516225628-2444 -n embed-certs-20220516225628-2444: exit status 7 (2.9263942s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:58:35.649349    9208 status.go:247] status error: host: state: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220516225628-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (26.67s)

TestStartStop/group/no-preload/serial/SecondStart (121.74s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-20220516225557-2444 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.23.6

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p no-preload-20220516225557-2444 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.23.6: exit status 60 (1m57.4027494s)

-- stdout --
	* [no-preload-20220516225557-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node no-preload-20220516225557-2444 in cluster no-preload-20220516225557-2444
	* Pulling base image ...
	* docker "no-preload-20220516225557-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "no-preload-20220516225557-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0516 22:58:15.065969    4512 out.go:296] Setting OutFile to fd 1492 ...
	I0516 22:58:15.122558    4512 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:58:15.122558    4512 out.go:309] Setting ErrFile to fd 1608...
	I0516 22:58:15.123555    4512 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:58:15.147715    4512 out.go:303] Setting JSON to false
	I0516 22:58:15.150863    4512 start.go:115] hostinfo: {"hostname":"minikube2","uptime":5007,"bootTime":1652736888,"procs":156,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0516 22:58:15.150863    4512 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0516 22:58:15.155120    4512 out.go:177] * [no-preload-20220516225557-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0516 22:58:15.158613    4512 notify.go:193] Checking for updates...
	I0516 22:58:15.160608    4512 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0516 22:58:15.162591    4512 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0516 22:58:15.165145    4512 out.go:177]   - MINIKUBE_LOCATION=12739
	I0516 22:58:15.167147    4512 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0516 22:58:15.170156    4512 config.go:178] Loaded profile config "no-preload-20220516225557-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:58:15.171157    4512 driver.go:358] Setting default libvirt URI to qemu:///system
	I0516 22:58:17.819230    4512 docker.go:137] docker version: linux-20.10.14
	I0516 22:58:17.829455    4512 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:58:19.846034    4512 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0165617s)
	I0516 22:58:19.846034    4512 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 22:58:18.8128379 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:58:19.851071    4512 out.go:177] * Using the docker driver based on existing profile
	I0516 22:58:19.853386    4512 start.go:284] selected driver: docker
	I0516 22:58:19.853428    4512 start.go:806] validating driver "docker" against &{Name:no-preload-20220516225557-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220516225557-2444 Namespace:default APIServerName:mini
kubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Ext
raDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 22:58:19.853691    4512 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0516 22:58:19.936612    4512 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:58:21.979334    4512 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0427046s)
	I0516 22:58:21.979334    4512 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 22:58:20.9470751 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:58:21.980173    4512 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0516 22:58:21.980202    4512 cni.go:95] Creating CNI manager for ""
	I0516 22:58:21.980202    4512 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0516 22:58:21.980249    4512 start_flags.go:306] config:
	{Name:no-preload-20220516225557-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220516225557-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.min
ikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 22:58:21.983811    4512 out.go:177] * Starting control plane node no-preload-20220516225557-2444 in cluster no-preload-20220516225557-2444
	I0516 22:58:21.987030    4512 cache.go:120] Beginning downloading kic base image for docker with docker
	I0516 22:58:21.989747    4512 out.go:177] * Pulling base image ...
	I0516 22:58:21.995729    4512 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0516 22:58:21.995729    4512 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0516 22:58:21.995729    4512 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-20220516225557-2444\config.json ...
	I0516 22:58:21.995729    4512 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I0516 22:58:21.996173    4512 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.23.6
	I0516 22:58:21.996228    4512 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd:3.5.1-0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.5.1-0
	I0516 22:58:21.996228    4512 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.23.6
	I0516 22:58:21.996228    4512 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.23.6
	I0516 22:58:21.996228    4512 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause:3.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.6
	I0516 22:58:21.996228    4512 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns\coredns:v1.8.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns\coredns_v1.8.6
	I0516 22:58:21.996173    4512 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.23.6
	I0516 22:58:22.169717    4512 cache.go:107] acquiring lock: {Name:mk3772b9dcb36c3cbc3aa4dfbe66c5266092e2c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:58:22.169717    4512 cache.go:107] acquiring lock: {Name:mk93ccdec90972c05247bea23df9b97c54ef0291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:58:22.169717    4512 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I0516 22:58:22.169717    4512 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.5.1-0 exists
	I0516 22:58:22.169717    4512 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 173.5432ms
	I0516 22:58:22.169717    4512 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I0516 22:58:22.169717    4512 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.1-0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\etcd_3.5.1-0" took 173.488ms
	I0516 22:58:22.169717    4512 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.1-0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.5.1-0 succeeded
	I0516 22:58:22.182729    4512 cache.go:107] acquiring lock: {Name:mk1cf2f2eee53b81f1c95945c2dd3783d0c7d992 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:58:22.182729    4512 cache.go:107] acquiring lock: {Name:mk90a34f529b9ea089d74e18a271c58e34606f29 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:58:22.182729    4512 cache.go:107] acquiring lock: {Name:mk9255ee8c390126b963cceac501a1fcc40ecb6b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:58:22.182729    4512 cache.go:107] acquiring lock: {Name:mka0a7f9fce0e132e7529c42bed359c919fc231b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:58:22.182729    4512 cache.go:107] acquiring lock: {Name:mkb7d2f7b32c5276784ba454e50c746d7fc6c05f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:58:22.182729    4512 cache.go:107] acquiring lock: {Name:mk40b809628c4e9673e2a41bf9fb31b8a6b3529d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:58:22.182729    4512 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.23.6 exists
	I0516 22:58:22.182729    4512 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.23.6 exists
	I0516 22:58:22.182729    4512 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.23.6 exists
	I0516 22:58:22.182729    4512 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.6 exists
	I0516 22:58:22.182729    4512 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.23.6" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-scheduler_v1.23.6" took 186.4998ms
	I0516 22:58:22.182729    4512 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.23.6 succeeded
	I0516 22:58:22.182729    4512 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns\coredns_v1.8.6 exists
	I0516 22:58:22.182729    4512 cache.go:96] cache image "k8s.gcr.io/pause:3.6" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\pause_3.6" took 186.4998ms
	I0516 22:58:22.182729    4512 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.23.6" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-proxy_v1.23.6" took 186.4998ms
	I0516 22:58:22.182729    4512 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.23.6" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-apiserver_v1.23.6" took 186.4998ms
	I0516 22:58:22.182729    4512 cache.go:80] save to tar file k8s.gcr.io/pause:3.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.6 succeeded
	I0516 22:58:22.182729    4512 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.6" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\coredns\\coredns_v1.8.6" took 186.4998ms
	I0516 22:58:22.182729    4512 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.23.6 succeeded
	I0516 22:58:22.182729    4512 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns\coredns_v1.8.6 succeeded
	I0516 22:58:22.182729    4512 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.23.6 exists
	I0516 22:58:22.182729    4512 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.23.6 succeeded
	I0516 22:58:22.182729    4512 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.23.6" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-controller-manager_v1.23.6" took 186.4998ms
	I0516 22:58:22.182729    4512 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.23.6 succeeded
	I0516 22:58:22.182729    4512 cache.go:87] Successfully saved all images to host disk.
	I0516 22:58:23.084261    4512 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0516 22:58:23.084572    4512 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:58:23.084902    4512 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:58:23.084982    4512 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
	I0516 22:58:23.085100    4512 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory, skipping pull
	I0516 22:58:23.085100    4512 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in cache, skipping pull
	I0516 22:58:23.085100    4512 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
	I0516 22:58:23.085100    4512 cache.go:160] Loading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from local cache
	I0516 22:58:23.085100    4512 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:58:25.346615    4512 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from cached tarball
	I0516 22:58:25.346690    4512 cache.go:206] Successfully downloaded all kic artifacts
	I0516 22:58:25.346763    4512 start.go:352] acquiring machines lock for no-preload-20220516225557-2444: {Name:mkb26cae446bfb2d0e92a0ecbe26357c6ab2d349 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:58:25.347076    4512 start.go:356] acquired machines lock for "no-preload-20220516225557-2444" in 226.4µs
	I0516 22:58:25.347076    4512 start.go:94] Skipping create...Using existing machine configuration
	I0516 22:58:25.347076    4512 fix.go:55] fixHost starting: 
	I0516 22:58:25.366041    4512 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:58:26.423847    4512 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:58:26.423874    4512 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.0577203s)
	I0516 22:58:26.423968    4512 fix.go:103] recreateIfNeeded on no-preload-20220516225557-2444: state= err=unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:58:26.423968    4512 fix.go:108] machineExists: false. err=machine does not exist
	I0516 22:58:26.427815    4512 out.go:177] * docker "no-preload-20220516225557-2444" container is missing, will recreate.
	I0516 22:58:26.429635    4512 delete.go:124] DEMOLISHING no-preload-20220516225557-2444 ...
	I0516 22:58:26.443607    4512 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:58:27.501659    4512 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:58:27.501749    4512 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.057781s)
	W0516 22:58:27.501749    4512 stop.go:75] unable to get state: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:58:27.501749    4512 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:58:27.518226    4512 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:58:28.531050    4512 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:58:28.531050    4512 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.0128152s)
	I0516 22:58:28.531050    4512 delete.go:82] Unable to get host status for no-preload-20220516225557-2444, assuming it has already been deleted: state: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:58:28.542139    4512 cli_runner.go:164] Run: docker container inspect -f {{.Id}} no-preload-20220516225557-2444
	W0516 22:58:29.587808    4512 cli_runner.go:211] docker container inspect -f {{.Id}} no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:58:29.587808    4512 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} no-preload-20220516225557-2444: (1.0456294s)
	I0516 22:58:29.587808    4512 kic.go:356] could not find the container no-preload-20220516225557-2444 to remove it. will try anyways
	I0516 22:58:29.596521    4512 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:58:30.661479    4512 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:58:30.661479    4512 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.0649483s)
	W0516 22:58:30.661479    4512 oci.go:84] error getting container status, will try to delete anyways: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:58:30.668498    4512 cli_runner.go:164] Run: docker exec --privileged -t no-preload-20220516225557-2444 /bin/bash -c "sudo init 0"
	W0516 22:58:31.764990    4512 cli_runner.go:211] docker exec --privileged -t no-preload-20220516225557-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 22:58:31.764990    4512 cli_runner.go:217] Completed: docker exec --privileged -t no-preload-20220516225557-2444 /bin/bash -c "sudo init 0": (1.096483s)
	I0516 22:58:31.764990    4512 oci.go:641] error shutdown no-preload-20220516225557-2444: docker exec --privileged -t no-preload-20220516225557-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:58:32.777236    4512 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:58:33.886788    4512 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:58:33.886788    4512 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.1095425s)
	I0516 22:58:33.886788    4512 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:58:33.886788    4512 oci.go:655] temporary error: container no-preload-20220516225557-2444 status is  but expect it to be exited
	I0516 22:58:33.886788    4512 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:58:34.455233    4512 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:58:35.492404    4512 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:58:35.492404    4512 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.0370002s)
	I0516 22:58:35.492473    4512 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:58:35.492473    4512 oci.go:655] temporary error: container no-preload-20220516225557-2444 status is  but expect it to be exited
	I0516 22:58:35.492473    4512 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:58:36.590825    4512 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:58:37.688625    4512 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:58:37.688625    4512 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.0976711s)
	I0516 22:58:37.688625    4512 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:58:37.688625    4512 oci.go:655] temporary error: container no-preload-20220516225557-2444 status is  but expect it to be exited
	I0516 22:58:37.688625    4512 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:58:39.016039    4512 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:58:40.086164    4512 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:58:40.086164    4512 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.0701159s)
	I0516 22:58:40.086164    4512 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:58:40.086164    4512 oci.go:655] temporary error: container no-preload-20220516225557-2444 status is  but expect it to be exited
	I0516 22:58:40.086164    4512 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:58:41.684302    4512 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:58:42.750784    4512 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:58:42.750825    4512 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.0663974s)
	I0516 22:58:42.750825    4512 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:58:42.750825    4512 oci.go:655] temporary error: container no-preload-20220516225557-2444 status is  but expect it to be exited
	I0516 22:58:42.750825    4512 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:58:45.115509    4512 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:58:46.186218    4512 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:58:46.186218    4512 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.0696559s)
	I0516 22:58:46.186218    4512 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:58:46.186218    4512 oci.go:655] temporary error: container no-preload-20220516225557-2444 status is  but expect it to be exited
	I0516 22:58:46.186218    4512 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:58:50.715612    4512 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:58:51.787673    4512 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:58:51.787707    4512 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.0719007s)
	I0516 22:58:51.787824    4512 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:58:51.787857    4512 oci.go:655] temporary error: container no-preload-20220516225557-2444 status is  but expect it to be exited
	I0516 22:58:51.787894    4512 oci.go:88] couldn't shut down no-preload-20220516225557-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	 
	I0516 22:58:51.796130    4512 cli_runner.go:164] Run: docker rm -f -v no-preload-20220516225557-2444
	I0516 22:58:52.906308    4512 cli_runner.go:217] Completed: docker rm -f -v no-preload-20220516225557-2444: (1.1101683s)
	I0516 22:58:52.915319    4512 cli_runner.go:164] Run: docker container inspect -f {{.Id}} no-preload-20220516225557-2444
	W0516 22:58:54.006292    4512 cli_runner.go:211] docker container inspect -f {{.Id}} no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:58:54.006292    4512 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} no-preload-20220516225557-2444: (1.0909633s)
	I0516 22:58:54.013305    4512 cli_runner.go:164] Run: docker network inspect no-preload-20220516225557-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:58:55.101952    4512 cli_runner.go:211] docker network inspect no-preload-20220516225557-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:58:55.102039    4512 cli_runner.go:217] Completed: docker network inspect no-preload-20220516225557-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0884518s)
	I0516 22:58:55.111901    4512 network_create.go:272] running [docker network inspect no-preload-20220516225557-2444] to gather additional debugging logs...
	I0516 22:58:55.111901    4512 cli_runner.go:164] Run: docker network inspect no-preload-20220516225557-2444
	W0516 22:58:56.183478    4512 cli_runner.go:211] docker network inspect no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:58:56.183478    4512 cli_runner.go:217] Completed: docker network inspect no-preload-20220516225557-2444: (1.0715674s)
	I0516 22:58:56.183478    4512 network_create.go:275] error running [docker network inspect no-preload-20220516225557-2444]: docker network inspect no-preload-20220516225557-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20220516225557-2444
	I0516 22:58:56.183478    4512 network_create.go:277] output of [docker network inspect no-preload-20220516225557-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20220516225557-2444
	
	** /stderr **
	W0516 22:58:56.184843    4512 delete.go:139] delete failed (probably ok) <nil>
	I0516 22:58:56.184843    4512 fix.go:115] Sleeping 1 second for extra luck!
	I0516 22:58:57.197606    4512 start.go:131] createHost starting for "" (driver="docker")
	I0516 22:58:57.201110    4512 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0516 22:58:57.201834    4512 start.go:165] libmachine.API.Create for "no-preload-20220516225557-2444" (driver="docker")
	I0516 22:58:57.201940    4512 client.go:168] LocalClient.Create starting
	I0516 22:58:57.202357    4512 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 22:58:57.202357    4512 main.go:134] libmachine: Decoding PEM data...
	I0516 22:58:57.202357    4512 main.go:134] libmachine: Parsing certificate...
	I0516 22:58:57.203067    4512 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 22:58:57.203441    4512 main.go:134] libmachine: Decoding PEM data...
	I0516 22:58:57.203441    4512 main.go:134] libmachine: Parsing certificate...
	I0516 22:58:57.214711    4512 cli_runner.go:164] Run: docker network inspect no-preload-20220516225557-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:58:58.285795    4512 cli_runner.go:211] docker network inspect no-preload-20220516225557-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:58:58.285795    4512 cli_runner.go:217] Completed: docker network inspect no-preload-20220516225557-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0710744s)
	I0516 22:58:58.295403    4512 network_create.go:272] running [docker network inspect no-preload-20220516225557-2444] to gather additional debugging logs...
	I0516 22:58:58.295403    4512 cli_runner.go:164] Run: docker network inspect no-preload-20220516225557-2444
	W0516 22:58:59.379700    4512 cli_runner.go:211] docker network inspect no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:58:59.379700    4512 cli_runner.go:217] Completed: docker network inspect no-preload-20220516225557-2444: (1.084189s)
	I0516 22:58:59.379700    4512 network_create.go:275] error running [docker network inspect no-preload-20220516225557-2444]: docker network inspect no-preload-20220516225557-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20220516225557-2444
	I0516 22:58:59.379700    4512 network_create.go:277] output of [docker network inspect no-preload-20220516225557-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20220516225557-2444
	
	** /stderr **
	I0516 22:58:59.388684    4512 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 22:59:00.426407    4512 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0377138s)
	I0516 22:59:00.441715    4512 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000c9c140] misses:0}
	I0516 22:59:00.441715    4512 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:59:00.441715    4512 network_create.go:115] attempt to create docker network no-preload-20220516225557-2444 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0516 22:59:00.450752    4512 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444
	W0516 22:59:01.502562    4512 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:59:01.502562    4512 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444: (1.0518008s)
	W0516 22:59:01.502562    4512 network_create.go:107] failed to create docker network no-preload-20220516225557-2444 192.168.49.0/24, will retry: subnet is taken
	I0516 22:59:01.523522    4512 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000c9c140] amended:false}} dirty:map[] misses:0}
	I0516 22:59:01.523522    4512 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:59:01.540517    4512 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000c9c140] amended:true}} dirty:map[192.168.49.0:0xc000c9c140 192.168.58.0:0xc000c9c1d8] misses:0}
	I0516 22:59:01.540517    4512 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:59:01.540517    4512 network_create.go:115] attempt to create docker network no-preload-20220516225557-2444 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0516 22:59:01.548505    4512 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444
	W0516 22:59:02.606761    4512 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:59:02.606761    4512 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444: (1.0580939s)
	W0516 22:59:02.606761    4512 network_create.go:107] failed to create docker network no-preload-20220516225557-2444 192.168.58.0/24, will retry: subnet is taken
	I0516 22:59:02.626374    4512 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000c9c140] amended:true}} dirty:map[192.168.49.0:0xc000c9c140 192.168.58.0:0xc000c9c1d8] misses:1}
	I0516 22:59:02.626374    4512 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:59:02.642015    4512 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000c9c140] amended:true}} dirty:map[192.168.49.0:0xc000c9c140 192.168.58.0:0xc000c9c1d8 192.168.67.0:0xc0012442d0] misses:1}
	I0516 22:59:02.642015    4512 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:59:02.642015    4512 network_create.go:115] attempt to create docker network no-preload-20220516225557-2444 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0516 22:59:02.649996    4512 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444
	W0516 22:59:03.721853    4512 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:59:03.721853    4512 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444: (1.0717165s)
	W0516 22:59:03.721853    4512 network_create.go:107] failed to create docker network no-preload-20220516225557-2444 192.168.67.0/24, will retry: subnet is taken
	I0516 22:59:03.739007    4512 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000c9c140] amended:true}} dirty:map[192.168.49.0:0xc000c9c140 192.168.58.0:0xc000c9c1d8 192.168.67.0:0xc0012442d0] misses:2}
	I0516 22:59:03.739007    4512 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:59:03.755946    4512 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000c9c140] amended:true}} dirty:map[192.168.49.0:0xc000c9c140 192.168.58.0:0xc000c9c1d8 192.168.67.0:0xc0012442d0 192.168.76.0:0xc000c9c270] misses:2}
	I0516 22:59:03.756008    4512 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:59:03.756008    4512 network_create.go:115] attempt to create docker network no-preload-20220516225557-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0516 22:59:03.766466    4512 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444
	W0516 22:59:04.907195    4512 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:59:04.907195    4512 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444: (1.1407189s)
	E0516 22:59:04.907195    4512 network_create.go:104] error while trying to create docker network no-preload-20220516225557-2444 192.168.76.0/24: create docker network no-preload-20220516225557-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network acf486cf3b630eee00ebb4f778f97a5823832737f6b244b6a7616e6e9fe845db (br-acf486cf3b63): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	W0516 22:59:04.907195    4512 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network no-preload-20220516225557-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network acf486cf3b630eee00ebb4f778f97a5823832737f6b244b6a7616e6e9fe845db (br-acf486cf3b63): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network no-preload-20220516225557-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network acf486cf3b630eee00ebb4f778f97a5823832737f6b244b6a7616e6e9fe845db (br-acf486cf3b63): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
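The daemon rejects every candidate subnet above ("networks have overlapping IPv4") because each one intersects a bridge network it already manages. The overlap test is just CIDR intersection; a minimal sketch with Python's stdlib `ipaddress` module (illustrative only — this is not minikube's or Docker's implementation, and the subnets are taken from the log):

```python
import ipaddress

def overlaps(a: str, b: str) -> bool:
    """Return True if two CIDR blocks share at least one address."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

# A subnet always conflicts with an existing network covering the same range:
print(overlaps("192.168.76.0/24", "192.168.76.0/24"))  # True
# Disjoint /24s, as minikube assumes when it walks 49.0 -> 58.0 -> 67.0 -> 76.0:
print(overlaps("192.168.49.0/24", "192.168.58.0/24"))  # False
```

In the run above all four candidates collided, so minikube gave up on a dedicated network rather than keep scanning.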
	I0516 22:59:04.925579    4512 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 22:59:06.013594    4512 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0879565s)
	I0516 22:59:06.025045    4512 cli_runner.go:164] Run: docker volume create no-preload-20220516225557-2444 --label name.minikube.sigs.k8s.io=no-preload-20220516225557-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 22:59:07.158421    4512 cli_runner.go:211] docker volume create no-preload-20220516225557-2444 --label name.minikube.sigs.k8s.io=no-preload-20220516225557-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 22:59:07.158421    4512 cli_runner.go:217] Completed: docker volume create no-preload-20220516225557-2444 --label name.minikube.sigs.k8s.io=no-preload-20220516225557-2444 --label created_by.minikube.sigs.k8s.io=true: (1.1333665s)
	I0516 22:59:07.158421    4512 client.go:171] LocalClient.Create took 9.9563965s
	I0516 22:59:09.182798    4512 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:59:09.193261    4512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444
	W0516 22:59:10.278728    4512 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:59:10.278798    4512 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: (1.0851865s)
	I0516 22:59:10.279017    4512 retry.go:31] will retry after 164.129813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:59:10.461895    4512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444
	W0516 22:59:11.591847    4512 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:59:11.591896    4512 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: (1.129731s)
	W0516 22:59:11.592252    4512 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	
	W0516 22:59:11.592338    4512 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:59:11.604001    4512 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:59:11.610704    4512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444
	W0516 22:59:12.694312    4512 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:59:12.694312    4512 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: (1.0835984s)
	I0516 22:59:12.694312    4512 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:59:12.927736    4512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444
	W0516 22:59:13.991827    4512 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:59:13.991882    4512 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: (1.0633393s)
	W0516 22:59:13.991882    4512 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	
	W0516 22:59:13.991882    4512 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:59:13.991882    4512 start.go:134] duration metric: createHost completed in 16.7939416s
	I0516 22:59:14.005037    4512 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:59:14.012426    4512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444
	W0516 22:59:15.082287    4512 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:59:15.082287    4512 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: (1.0698526s)
	I0516 22:59:15.082287    4512 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:59:15.421377    4512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444
	W0516 22:59:16.502853    4512 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:59:16.502853    4512 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: (1.0814664s)
	W0516 22:59:16.502853    4512 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	
	W0516 22:59:16.502853    4512 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:59:16.513899    4512 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:59:16.521901    4512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444
	W0516 22:59:17.564940    4512 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:59:17.564940    4512 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: (1.0430301s)
	I0516 22:59:17.564940    4512 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:59:17.793493    4512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444
	W0516 22:59:18.868433    4512 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:59:18.868601    4512 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: (1.074931s)
	W0516 22:59:18.868902    4512 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	
	W0516 22:59:18.868902    4512 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:59:18.868969    4512 fix.go:57] fixHost completed within 53.5214384s
	I0516 22:59:18.868969    4512 start.go:81] releasing machines lock for "no-preload-20220516225557-2444", held for 53.5214384s
	W0516 22:59:18.869225    4512 start.go:608] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220516225557-2444 container: docker volume create no-preload-20220516225557-2444 --label name.minikube.sigs.k8s.io=no-preload-20220516225557-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220516225557-2444: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220516225557-2444': mkdir /var/lib/docker/volumes/no-preload-20220516225557-2444: read-only file system
	W0516 22:59:18.869686    4512 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220516225557-2444 container: docker volume create no-preload-20220516225557-2444 --label name.minikube.sigs.k8s.io=no-preload-20220516225557-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220516225557-2444: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220516225557-2444': mkdir /var/lib/docker/volumes/no-preload-20220516225557-2444: read-only file system
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220516225557-2444 container: docker volume create no-preload-20220516225557-2444 --label name.minikube.sigs.k8s.io=no-preload-20220516225557-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220516225557-2444: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220516225557-2444': mkdir /var/lib/docker/volumes/no-preload-20220516225557-2444: read-only file system
	
	I0516 22:59:18.869686    4512 start.go:623] Will try again in 5 seconds ...
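The `retry.go:31` lines above show delays that grow from a few hundred milliseconds toward multiple seconds with some randomness (164ms, 200ms, 328ms, ... 1.99s). A rough sketch of that jittered-backoff pattern — not minikube's actual `retry.go` code; the function name and parameters are illustrative:

```python
import random

def backoff_delays(base_ms: float, factor: float, attempts: int,
                   jitter: float = 0.5, seed: int = 1):
    """Yield retry delays (ms) that grow geometrically, each perturbed
    by random jitter, mirroring the 'will retry after ...' intervals."""
    rng = random.Random(seed)  # seeded here only so the sketch is reproducible
    delay = base_ms
    for _ in range(attempts):
        yield delay * (1 + rng.uniform(-jitter, jitter))
        delay *= factor
```

The growing delays spread repeated `docker container inspect` probes out over time while the container is missing, instead of hammering the daemon at a fixed interval.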
	I0516 22:59:23.873118    4512 start.go:352] acquiring machines lock for no-preload-20220516225557-2444: {Name:mkb26cae446bfb2d0e92a0ecbe26357c6ab2d349 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:59:23.873517    4512 start.go:356] acquired machines lock for "no-preload-20220516225557-2444" in 240.6µs
	I0516 22:59:23.873690    4512 start.go:94] Skipping create...Using existing machine configuration
	I0516 22:59:23.873739    4512 fix.go:55] fixHost starting: 
	I0516 22:59:23.885313    4512 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:59:25.010833    4512 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:59:25.010833    4512 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.1254474s)
	I0516 22:59:25.010833    4512 fix.go:103] recreateIfNeeded on no-preload-20220516225557-2444: state= err=unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:59:25.010833    4512 fix.go:108] machineExists: false. err=machine does not exist
	I0516 22:59:25.016218    4512 out.go:177] * docker "no-preload-20220516225557-2444" container is missing, will recreate.
	I0516 22:59:25.018459    4512 delete.go:124] DEMOLISHING no-preload-20220516225557-2444 ...
	I0516 22:59:25.033715    4512 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:59:26.160920    4512 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:59:26.160920    4512 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.1271958s)
	W0516 22:59:26.160920    4512 stop.go:75] unable to get state: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:59:26.160920    4512 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:59:26.181815    4512 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:59:27.305805    4512 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:59:27.305805    4512 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.1239803s)
	I0516 22:59:27.305805    4512 delete.go:82] Unable to get host status for no-preload-20220516225557-2444, assuming it has already been deleted: state: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:59:27.314792    4512 cli_runner.go:164] Run: docker container inspect -f {{.Id}} no-preload-20220516225557-2444
	W0516 22:59:28.362297    4512 cli_runner.go:211] docker container inspect -f {{.Id}} no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:59:28.362297    4512 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} no-preload-20220516225557-2444: (1.0474955s)
	I0516 22:59:28.362297    4512 kic.go:356] could not find the container no-preload-20220516225557-2444 to remove it. will try anyways
	I0516 22:59:28.370025    4512 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:59:29.467374    4512 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:59:29.467519    4512 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.0973398s)
	W0516 22:59:29.467519    4512 oci.go:84] error getting container status, will try to delete anyways: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:59:29.476368    4512 cli_runner.go:164] Run: docker exec --privileged -t no-preload-20220516225557-2444 /bin/bash -c "sudo init 0"
	W0516 22:59:30.577405    4512 cli_runner.go:211] docker exec --privileged -t no-preload-20220516225557-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 22:59:30.577405    4512 cli_runner.go:217] Completed: docker exec --privileged -t no-preload-20220516225557-2444 /bin/bash -c "sudo init 0": (1.101028s)
	I0516 22:59:30.577405    4512 oci.go:641] error shutdown no-preload-20220516225557-2444: docker exec --privileged -t no-preload-20220516225557-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:59:31.593175    4512 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:59:32.712932    4512 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:59:32.713079    4512 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.1197472s)
	I0516 22:59:32.713199    4512 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:59:32.713199    4512 oci.go:655] temporary error: container no-preload-20220516225557-2444 status is  but expect it to be exited
	I0516 22:59:32.713255    4512 retry.go:31] will retry after 484.444922ms: couldn't verify container is exited. %v: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:59:33.209940    4512 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:59:34.342670    4512 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:59:34.342727    4512 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.1326718s)
	I0516 22:59:34.342758    4512 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:59:34.342821    4512 oci.go:655] temporary error: container no-preload-20220516225557-2444 status is  but expect it to be exited
	I0516 22:59:34.342859    4512 retry.go:31] will retry after 587.275613ms: couldn't verify container is exited. %v: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:59:34.951647    4512 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:59:36.034704    4512 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:59:36.034954    4512 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.0829762s)
	I0516 22:59:36.035055    4512 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:59:36.035091    4512 oci.go:655] temporary error: container no-preload-20220516225557-2444 status is  but expect it to be exited
	I0516 22:59:36.035091    4512 retry.go:31] will retry after 892.239589ms: couldn't verify container is exited. %v: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:59:36.938519    4512 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:59:38.023335    4512 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:59:38.023335    4512 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.0841897s)
	I0516 22:59:38.023335    4512 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:59:38.023335    4512 oci.go:655] temporary error: container no-preload-20220516225557-2444 status is  but expect it to be exited
	I0516 22:59:38.023335    4512 retry.go:31] will retry after 1.989705391s: couldn't verify container is exited. %v: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:59:40.033625    4512 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:59:41.113286    4512 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:59:41.113286    4512 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.0786043s)
	I0516 22:59:41.113286    4512 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:59:41.113286    4512 oci.go:655] temporary error: container no-preload-20220516225557-2444 status is  but expect it to be exited
	I0516 22:59:41.113286    4512 retry.go:31] will retry after 1.818837414s: couldn't verify container is exited. %v: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:59:42.955565    4512 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:59:44.037873    4512 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:59:44.037873    4512 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.0822987s)
	I0516 22:59:44.037873    4512 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:59:44.037873    4512 oci.go:655] temporary error: container no-preload-20220516225557-2444 status is  but expect it to be exited
	I0516 22:59:44.037873    4512 retry.go:31] will retry after 2.669912672s: couldn't verify container is exited. %v: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:59:46.731152    4512 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 22:59:47.838431    4512 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:59:47.838653    4512 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (1.1062473s)
	I0516 22:59:47.838777    4512 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 22:59:47.838777    4512 oci.go:655] temporary error: container no-preload-20220516225557-2444 status is  but expect it to be exited
	I0516 22:59:47.838777    4512 oci.go:88] couldn't shut down no-preload-20220516225557-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	 
	I0516 22:59:47.847617    4512 cli_runner.go:164] Run: docker rm -f -v no-preload-20220516225557-2444
	I0516 22:59:48.990561    4512 cli_runner.go:217] Completed: docker rm -f -v no-preload-20220516225557-2444: (1.1428424s)
	I0516 22:59:48.998548    4512 cli_runner.go:164] Run: docker container inspect -f {{.Id}} no-preload-20220516225557-2444
	W0516 22:59:50.110016    4512 cli_runner.go:211] docker container inspect -f {{.Id}} no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:59:50.110016    4512 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} no-preload-20220516225557-2444: (1.1114592s)
	I0516 22:59:50.119372    4512 cli_runner.go:164] Run: docker network inspect no-preload-20220516225557-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:59:51.202379    4512 cli_runner.go:211] docker network inspect no-preload-20220516225557-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:59:51.202379    4512 cli_runner.go:217] Completed: docker network inspect no-preload-20220516225557-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0829975s)
	I0516 22:59:51.210295    4512 network_create.go:272] running [docker network inspect no-preload-20220516225557-2444] to gather additional debugging logs...
	I0516 22:59:51.210295    4512 cli_runner.go:164] Run: docker network inspect no-preload-20220516225557-2444
	W0516 22:59:52.305552    4512 cli_runner.go:211] docker network inspect no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:59:52.305552    4512 cli_runner.go:217] Completed: docker network inspect no-preload-20220516225557-2444: (1.0952476s)
	I0516 22:59:52.305552    4512 network_create.go:275] error running [docker network inspect no-preload-20220516225557-2444]: docker network inspect no-preload-20220516225557-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20220516225557-2444
	I0516 22:59:52.305552    4512 network_create.go:277] output of [docker network inspect no-preload-20220516225557-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20220516225557-2444
	
	** /stderr **
	W0516 22:59:52.306842    4512 delete.go:139] delete failed (probably ok) <nil>
	I0516 22:59:52.306968    4512 fix.go:115] Sleeping 1 second for extra luck!
	I0516 22:59:53.319831    4512 start.go:131] createHost starting for "" (driver="docker")
	I0516 22:59:53.323108    4512 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0516 22:59:53.323433    4512 start.go:165] libmachine.API.Create for "no-preload-20220516225557-2444" (driver="docker")
	I0516 22:59:53.323462    4512 client.go:168] LocalClient.Create starting
	I0516 22:59:53.323462    4512 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 22:59:53.324149    4512 main.go:134] libmachine: Decoding PEM data...
	I0516 22:59:53.324216    4512 main.go:134] libmachine: Parsing certificate...
	I0516 22:59:53.324268    4512 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 22:59:53.324268    4512 main.go:134] libmachine: Decoding PEM data...
	I0516 22:59:53.324268    4512 main.go:134] libmachine: Parsing certificate...
	I0516 22:59:53.334271    4512 cli_runner.go:164] Run: docker network inspect no-preload-20220516225557-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:59:54.437244    4512 cli_runner.go:211] docker network inspect no-preload-20220516225557-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:59:54.437279    4512 cli_runner.go:217] Completed: docker network inspect no-preload-20220516225557-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1027985s)
	I0516 22:59:54.446010    4512 network_create.go:272] running [docker network inspect no-preload-20220516225557-2444] to gather additional debugging logs...
	I0516 22:59:54.446010    4512 cli_runner.go:164] Run: docker network inspect no-preload-20220516225557-2444
	W0516 22:59:55.555135    4512 cli_runner.go:211] docker network inspect no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:59:55.555135    4512 cli_runner.go:217] Completed: docker network inspect no-preload-20220516225557-2444: (1.1091163s)
	I0516 22:59:55.555135    4512 network_create.go:275] error running [docker network inspect no-preload-20220516225557-2444]: docker network inspect no-preload-20220516225557-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20220516225557-2444
	I0516 22:59:55.555135    4512 network_create.go:277] output of [docker network inspect no-preload-20220516225557-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20220516225557-2444
	
	** /stderr **
	I0516 22:59:55.562971    4512 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 22:59:56.656714    4512 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0937338s)
	I0516 22:59:56.673785    4512 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000c9c140] amended:true}} dirty:map[192.168.49.0:0xc000c9c140 192.168.58.0:0xc000c9c1d8 192.168.67.0:0xc0012442d0 192.168.76.0:0xc000c9c270] misses:2}
	I0516 22:59:56.673785    4512 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:59:56.689837    4512 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000c9c140] amended:true}} dirty:map[192.168.49.0:0xc000c9c140 192.168.58.0:0xc000c9c1d8 192.168.67.0:0xc0012442d0 192.168.76.0:0xc000c9c270] misses:3}
	I0516 22:59:56.689837    4512 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:59:56.709142    4512 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000c9c140 192.168.58.0:0xc000c9c1d8 192.168.67.0:0xc0012442d0 192.168.76.0:0xc000c9c270] amended:false}} dirty:map[] misses:0}
	I0516 22:59:56.709142    4512 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:59:56.724775    4512 network.go:279] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000c9c140 192.168.58.0:0xc000c9c1d8 192.168.67.0:0xc0012442d0 192.168.76.0:0xc000c9c270] amended:false}} dirty:map[] misses:0}
	I0516 22:59:56.725373    4512 network.go:238] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:59:56.743372    4512 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000c9c140 192.168.58.0:0xc000c9c1d8 192.168.67.0:0xc0012442d0 192.168.76.0:0xc000c9c270] amended:true}} dirty:map[192.168.49.0:0xc000c9c140 192.168.58.0:0xc000c9c1d8 192.168.67.0:0xc0012442d0 192.168.76.0:0xc000c9c270 192.168.85.0:0xc000d3e3a0] misses:0}
	I0516 22:59:56.743372    4512 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:59:56.743372    4512 network_create.go:115] attempt to create docker network no-preload-20220516225557-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0516 22:59:56.754175    4512 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444
	W0516 22:59:57.850178    4512 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444 returned with exit code 1
	I0516 22:59:57.850178    4512 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444: (1.0959937s)
	E0516 22:59:57.850178    4512 network_create.go:104] error while trying to create docker network no-preload-20220516225557-2444 192.168.85.0/24: create docker network no-preload-20220516225557-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 03af8ac064c34d105f0b76673efd5cd8e4a7f92570338768ccf2bb1a6517c51e (br-03af8ac064c3): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	W0516 22:59:57.850178    4512 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network no-preload-20220516225557-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 03af8ac064c34d105f0b76673efd5cd8e4a7f92570338768ccf2bb1a6517c51e (br-03af8ac064c3): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network no-preload-20220516225557-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220516225557-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 03af8ac064c34d105f0b76673efd5cd8e4a7f92570338768ccf2bb1a6517c51e (br-03af8ac064c3): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	I0516 22:59:57.869063    4512 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 22:59:58.988805    4512 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1196531s)
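The `docker network create` above is rejected because the freshly reserved 192.168.85.0/24 collides with an existing bridge network's range ("networks have overlapping IPv4"). The overlap condition the daemon enforces can be sketched with Go's net package — two CIDR ranges overlap exactly when either network's base address lies inside the other (inputs assumed to be well-formed CIDRs):

```go
package main

import (
	"fmt"
	"net"
)

// subnetsOverlap reports whether two IPv4 CIDR blocks share addresses —
// the condition behind "networks have overlapping IPv4" in the log.
// Inputs are assumed valid; error returns from ParseCIDR are elided.
func subnetsOverlap(a, b string) bool {
	_, na, _ := net.ParseCIDR(a)
	_, nb, _ := net.ParseCIDR(b)
	return na.Contains(nb.IP) || nb.Contains(na.IP)
}

func main() {
	// The subnet minikube picked vs. an existing bridge with the same range.
	fmt.Println(subnetsOverlap("192.168.85.0/24", "192.168.85.0/24")) // true
	fmt.Println(subnetsOverlap("192.168.85.0/24", "192.168.76.0/24")) // false
}
```

This is also why the preceding network.go lines walk 192.168.49.0/24, .58, .67, and .76 before settling on .85: each reserved subnet is skipped precisely to avoid this collision, but the reservation map evidently did not know about the conflicting bridge.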
	I0516 22:59:58.989313    4512 cli_runner.go:164] Run: docker volume create no-preload-20220516225557-2444 --label name.minikube.sigs.k8s.io=no-preload-20220516225557-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 23:00:00.186520    4512 cli_runner.go:211] docker volume create no-preload-20220516225557-2444 --label name.minikube.sigs.k8s.io=no-preload-20220516225557-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 23:00:00.186520    4512 cli_runner.go:217] Completed: docker volume create no-preload-20220516225557-2444 --label name.minikube.sigs.k8s.io=no-preload-20220516225557-2444 --label created_by.minikube.sigs.k8s.io=true: (1.1971966s)
	I0516 23:00:00.186520    4512 client.go:171] LocalClient.Create took 6.8629998s
	I0516 23:00:02.209805    4512 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:00:02.218281    4512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444
	W0516 23:00:03.305274    4512 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444 returned with exit code 1
	I0516 23:00:03.305274    4512 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: (1.0869832s)
	I0516 23:00:03.305274    4512 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 23:00:03.588766    4512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444
	W0516 23:00:04.692522    4512 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444 returned with exit code 1
	I0516 23:00:04.692522    4512 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: (1.1037469s)
	W0516 23:00:04.692522    4512 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	
	W0516 23:00:04.692522    4512 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 23:00:04.702533    4512 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:00:04.709513    4512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444
	W0516 23:00:05.796540    4512 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444 returned with exit code 1
	I0516 23:00:05.796540    4512 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: (1.0870174s)
	I0516 23:00:05.796540    4512 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 23:00:06.007734    4512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444
	W0516 23:00:07.107408    4512 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444 returned with exit code 1
	I0516 23:00:07.107495    4512 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: (1.0996236s)
	W0516 23:00:07.107758    4512 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	
	W0516 23:00:07.107794    4512 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
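The format string passed to `docker container inspect -f` in the ssh-port probes above is an ordinary Go template; it is never evaluated here because the container is gone, so docker exits 1 with "No such container" first. Against a hand-written sample of the inspect JSON (not captured from this run), the same template resolves the host port mapped to 22/tcp:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"text/template"
)

// hostPort evaluates the same Go template the inspect calls pass via -f,
// against a container-inspect JSON blob. The sample blob in main is
// hand-written for illustration, not captured from this run.
func hostPort(blob []byte, port string) (string, error) {
	var c struct {
		NetworkSettings struct {
			Ports map[string][]struct{ HostIp, HostPort string }
		}
	}
	if err := json.Unmarshal(blob, &c); err != nil {
		return "", err
	}
	tmpl, err := template.New("p").Parse(fmt.Sprintf(
		`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port))
	if err != nil {
		return "", err
	}
	var out bytes.Buffer
	if err := tmpl.Execute(&out, c); err != nil {
		return "", err
	}
	return out.String(), nil
}

func main() {
	sample := []byte(`{"NetworkSettings":{"Ports":{"22/tcp":[{"HostIp":"127.0.0.1","HostPort":"55000"}]}}}`)
	p, err := hostPort(sample, "22/tcp")
	fmt.Println(p, err)
}
```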
	I0516 23:00:07.107833    4512 start.go:134] duration metric: createHost completed in 13.7877727s
	I0516 23:00:07.119837    4512 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:00:07.126287    4512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444
	W0516 23:00:08.216546    4512 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444 returned with exit code 1
	I0516 23:00:08.216546    4512 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: (1.0902495s)
	I0516 23:00:08.216546    4512 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 23:00:08.546238    4512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444
	W0516 23:00:09.612525    4512 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444 returned with exit code 1
	I0516 23:00:09.612525    4512 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: (1.0662777s)
	W0516 23:00:09.612525    4512 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	
	W0516 23:00:09.612525    4512 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 23:00:09.624222    4512 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:00:09.631888    4512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444
	W0516 23:00:10.743048    4512 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444 returned with exit code 1
	I0516 23:00:10.743048    4512 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: (1.1111503s)
	I0516 23:00:10.743048    4512 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 23:00:11.103103    4512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444
	W0516 23:00:12.182654    4512 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444 returned with exit code 1
	I0516 23:00:12.182729    4512 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: (1.0795425s)
	W0516 23:00:12.182968    4512 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	
	W0516 23:00:12.182968    4512 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220516225557-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220516225557-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	I0516 23:00:12.183032    4512 fix.go:57] fixHost completed within 48.3088827s
	I0516 23:00:12.183032    4512 start.go:81] releasing machines lock for "no-preload-20220516225557-2444", held for 48.3090609s
	W0516 23:00:12.183593    4512 out.go:239] * Failed to start docker container. Running "minikube delete -p no-preload-20220516225557-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220516225557-2444 container: docker volume create no-preload-20220516225557-2444 --label name.minikube.sigs.k8s.io=no-preload-20220516225557-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220516225557-2444: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220516225557-2444': mkdir /var/lib/docker/volumes/no-preload-20220516225557-2444: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p no-preload-20220516225557-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220516225557-2444 container: docker volume create no-preload-20220516225557-2444 --label name.minikube.sigs.k8s.io=no-preload-20220516225557-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220516225557-2444: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220516225557-2444': mkdir /var/lib/docker/volumes/no-preload-20220516225557-2444: read-only file system
	
	I0516 23:00:12.188376    4512 out.go:177] 
	W0516 23:00:12.190428    4512 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220516225557-2444 container: docker volume create no-preload-20220516225557-2444 --label name.minikube.sigs.k8s.io=no-preload-20220516225557-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220516225557-2444: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220516225557-2444': mkdir /var/lib/docker/volumes/no-preload-20220516225557-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220516225557-2444 container: docker volume create no-preload-20220516225557-2444 --label name.minikube.sigs.k8s.io=no-preload-20220516225557-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220516225557-2444: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220516225557-2444': mkdir /var/lib/docker/volumes/no-preload-20220516225557-2444: read-only file system
	
	W0516 23:00:12.190982    4512 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0516 23:00:12.191206    4512 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0516 23:00:12.194314    4512 out.go:177] 

** /stderr **
start_stop_delete_test.go:261: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p no-preload-20220516225557-2444 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.23.6": exit status 60
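The exit-status-60 failure above bottoms out in the daemon's `docker volume create` error: `/var/lib/docker/volumes` on the Docker Desktop VM has gone read-only, which minikube surfaces as reason code `PR_DOCKER_READONLY_VOL` together with the "Restart Docker" suggestion. As a minimal illustrative sketch (a hypothetical helper written here for clarity, not minikube's actual Go implementation), the stderr seen in this log could be classified like so:

```python
# Hypothetical helper (not minikube's real code): map the docker daemon
# stderr captured above to a minikube-style exit reason.
def classify_start_error(stderr: str) -> str:
    """Classify a failed `minikube start` by its docker stderr."""
    if "read-only file system" in stderr and "/var/lib/docker/volumes" in stderr:
        # The VM's volume root is read-only; minikube suggests restarting Docker.
        return "PR_DOCKER_READONLY_VOL"
    if "No such container" in stderr:
        return "CONTAINER_MISSING"
    return "UNKNOWN"

sample = ("Error response from daemon: create x: error while creating volume "
          "root path '/var/lib/docker/volumes/x': "
          "mkdir /var/lib/docker/volumes/x: read-only file system")
print(classify_start_error(sample))  # PR_DOCKER_READONLY_VOL
```

The first branch matches the exact stderr recorded in the log above; the other branches are placeholders for failure modes this sketch does not model.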
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220516225557-2444

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220516225557-2444: exit status 1 (1.1735972s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220516225557-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220516225557-2444 -n no-preload-20220516225557-2444

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220516225557-2444 -n no-preload-20220516225557-2444: exit status 7 (2.9920129s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 23:00:16.547082    9140 status.go:247] status error: host: state: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220516225557-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (121.74s)
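The post-mortem above shows the pattern that repeats through this report: `docker container inspect` exits 1 with "No such container", so `minikube status` reports the host as `Nonexistent` and exits with status 7, which the test helper treats as "may be ok". A simplified sketch of that mapping (hypothetical Python, not minikube's actual status code) under the assumption that only the case visible in this log is modeled:

```python
# Hypothetical sketch of the status probe seen above (minikube's real
# implementation is Go; only the "container deleted" case is modeled).
def host_state(inspect_exit_code: int, inspect_stderr: str):
    """Return (host state, `minikube status` exit code) -- simplified."""
    if inspect_exit_code != 0 and "No such container" in inspect_stderr:
        # The profile's container is gone entirely: host is "Nonexistent"
        # and `minikube status` exits with status 7, as in the log above.
        return "Nonexistent", 7
    # All other daemon states are out of scope for this sketch.
    return "Unknown", 0

print(host_state(1, "Error: No such container: no-preload-20220516225557-2444"))
# → ('Nonexistent', 7)
```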

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (9.85s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220516225628-2444 -n embed-certs-20220516225628-2444
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220516225628-2444 -n embed-certs-20220516225628-2444: exit status 7 (2.9313593s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:58:38.579823    5812 status.go:247] status error: host: state: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444

** /stderr **
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:243: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-20220516225628-2444 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-20220516225628-2444 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (2.9402277s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220516225628-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220516225628-2444: exit status 1 (1.1246989s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220516225628-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220516225628-2444 -n embed-certs-20220516225628-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220516225628-2444 -n embed-certs-20220516225628-2444: exit status 7 (2.8419784s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:58:45.500251    6104 status.go:247] status error: host: state: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220516225628-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (9.85s)
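This failure is an assertion mismatch rather than a crash: the check at start_stop_delete_test.go:243 expects the post-stop host status to be exactly "Stopped", but because the container was removed outright the status probe returned "Nonexistent". The comparison can be sketched as (hypothetical Python; the real test is Go):

```python
# Hypothetical restatement of the assertion at start_stop_delete_test.go:243
# (the actual test is written in Go).
def post_stop_status_ok(host_status: str) -> bool:
    """After `minikube stop`, the host status must read exactly "Stopped"."""
    # "Nonexistent" (container deleted, as in this run) fails the check.
    return host_status.strip() == "Stopped"

print(post_stop_status_ok("Nonexistent"))  # False
print(post_stop_status_ok("Stopped"))      # True
```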

TestStartStop/group/embed-certs/serial/SecondStart (122.15s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-20220516225628-2444 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.6

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p embed-certs-20220516225628-2444 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.6: exit status 60 (1m57.8132745s)

-- stdout --
	* [embed-certs-20220516225628-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node embed-certs-20220516225628-2444 in cluster embed-certs-20220516225628-2444
	* Pulling base image ...
	* docker "embed-certs-20220516225628-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "embed-certs-20220516225628-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0516 22:58:45.765150    4236 out.go:296] Setting OutFile to fd 1404 ...
	I0516 22:58:45.841295    4236 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:58:45.841295    4236 out.go:309] Setting ErrFile to fd 1936...
	I0516 22:58:45.841295    4236 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:58:45.852154    4236 out.go:303] Setting JSON to false
	I0516 22:58:45.853591    4236 start.go:115] hostinfo: {"hostname":"minikube2","uptime":5038,"bootTime":1652736887,"procs":152,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0516 22:58:45.853591    4236 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0516 22:58:45.858729    4236 out.go:177] * [embed-certs-20220516225628-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0516 22:58:45.860828    4236 notify.go:193] Checking for updates...
	I0516 22:58:45.863797    4236 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0516 22:58:45.865939    4236 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0516 22:58:45.868375    4236 out.go:177]   - MINIKUBE_LOCATION=12739
	I0516 22:58:45.870765    4236 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0516 22:58:45.873125    4236 config.go:178] Loaded profile config "embed-certs-20220516225628-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:58:45.874301    4236 driver.go:358] Setting default libvirt URI to qemu:///system
	I0516 22:58:48.479124    4236 docker.go:137] docker version: linux-20.10.14
	I0516 22:58:48.488333    4236 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:58:50.549164    4236 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0608143s)
	I0516 22:58:50.550085    4236 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 22:58:49.5005159 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:58:50.553743    4236 out.go:177] * Using the docker driver based on existing profile
	I0516 22:58:50.556289    4236 start.go:284] selected driver: docker
	I0516 22:58:50.556289    4236 start.go:806] validating driver "docker" against &{Name:embed-certs-20220516225628-2444 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220516225628-2444 Namespace:default APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Ex
traDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 22:58:50.556893    4236 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0516 22:58:50.635317    4236 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:58:52.748821    4236 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1134868s)
	I0516 22:58:52.748821    4236 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 22:58:51.6709234 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:58:52.748821    4236 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0516 22:58:52.748821    4236 cni.go:95] Creating CNI manager for ""
	I0516 22:58:52.748821    4236 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0516 22:58:52.748821    4236 start_flags.go:306] config:
	{Name:embed-certs-20220516225628-2444 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220516225628-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.mi
nikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 22:58:52.752824    4236 out.go:177] * Starting control plane node embed-certs-20220516225628-2444 in cluster embed-certs-20220516225628-2444
	I0516 22:58:52.755822    4236 cache.go:120] Beginning downloading kic base image for docker with docker
	I0516 22:58:52.757830    4236 out.go:177] * Pulling base image ...
	I0516 22:58:52.760842    4236 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0516 22:58:52.760842    4236 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0516 22:58:52.760842    4236 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0516 22:58:52.760842    4236 cache.go:57] Caching tarball of preloaded images
	I0516 22:58:52.761822    4236 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0516 22:58:52.761822    4236 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0516 22:58:52.761822    4236 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\embed-certs-20220516225628-2444\config.json ...
	I0516 22:58:53.880654    4236 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0516 22:58:53.880800    4236 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:58:53.881134    4236 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:58:53.881134    4236 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
	I0516 22:58:53.881134    4236 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory, skipping pull
	I0516 22:58:53.881134    4236 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in cache, skipping pull
	I0516 22:58:53.881134    4236 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
	I0516 22:58:53.881134    4236 cache.go:160] Loading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from local cache
	I0516 22:58:53.881134    4236 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 22:58:56.195035    4236 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from cached tarball
	I0516 22:58:56.195035    4236 cache.go:206] Successfully downloaded all kic artifacts
	I0516 22:58:56.195035    4236 start.go:352] acquiring machines lock for embed-certs-20220516225628-2444: {Name:mk313f3adfa614f48756e4c4bd1949083e33b93c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:58:56.196092    4236 start.go:356] acquired machines lock for "embed-certs-20220516225628-2444" in 1.057ms
	I0516 22:58:56.196092    4236 start.go:94] Skipping create...Using existing machine configuration
	I0516 22:58:56.196092    4236 fix.go:55] fixHost starting: 
	I0516 22:58:56.215018    4236 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 22:58:57.259161    4236 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:58:57.259238    4236 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.0439308s)
	I0516 22:58:57.259338    4236 fix.go:103] recreateIfNeeded on embed-certs-20220516225628-2444: state= err=unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:58:57.259382    4236 fix.go:108] machineExists: false. err=machine does not exist
	I0516 22:58:57.263431    4236 out.go:177] * docker "embed-certs-20220516225628-2444" container is missing, will recreate.
	I0516 22:58:57.266088    4236 delete.go:124] DEMOLISHING embed-certs-20220516225628-2444 ...
	I0516 22:58:57.281300    4236 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 22:58:58.301768    4236 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:58:58.301768    4236 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.0193313s)
	W0516 22:58:58.301768    4236 stop.go:75] unable to get state: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:58:58.301768    4236 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:58:58.320628    4236 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 22:58:59.364606    4236 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:58:59.364710    4236 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.0437958s)
	I0516 22:58:59.364736    4236 delete.go:82] Unable to get host status for embed-certs-20220516225628-2444, assuming it has already been deleted: state: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:58:59.373682    4236 cli_runner.go:164] Run: docker container inspect -f {{.Id}} embed-certs-20220516225628-2444
	W0516 22:59:00.441715    4236 cli_runner.go:211] docker container inspect -f {{.Id}} embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:59:00.441715    4236 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} embed-certs-20220516225628-2444: (1.0679807s)
	I0516 22:59:00.441715    4236 kic.go:356] could not find the container embed-certs-20220516225628-2444 to remove it. will try anyways
	I0516 22:59:00.441715    4236 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 22:59:01.518565    4236 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:59:01.518565    4236 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.0768404s)
	W0516 22:59:01.518565    4236 oci.go:84] error getting container status, will try to delete anyways: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:59:01.525520    4236 cli_runner.go:164] Run: docker exec --privileged -t embed-certs-20220516225628-2444 /bin/bash -c "sudo init 0"
	W0516 22:59:02.575204    4236 cli_runner.go:211] docker exec --privileged -t embed-certs-20220516225628-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 22:59:02.575204    4236 cli_runner.go:217] Completed: docker exec --privileged -t embed-certs-20220516225628-2444 /bin/bash -c "sudo init 0": (1.049675s)
	I0516 22:59:02.575204    4236 oci.go:641] error shutdown embed-certs-20220516225628-2444: docker exec --privileged -t embed-certs-20220516225628-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:59:03.592304    4236 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 22:59:04.738507    4236 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:59:04.738507    4236 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.1461932s)
	I0516 22:59:04.738791    4236 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:59:04.738791    4236 oci.go:655] temporary error: container embed-certs-20220516225628-2444 status is  but expect it to be exited
	I0516 22:59:04.738791    4236 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:59:05.315078    4236 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 22:59:06.443572    4236 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:59:06.443750    4236 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.1284839s)
	I0516 22:59:06.443750    4236 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:59:06.443750    4236 oci.go:655] temporary error: container embed-certs-20220516225628-2444 status is  but expect it to be exited
	I0516 22:59:06.443750    4236 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:59:07.535670    4236 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 22:59:08.586058    4236 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:59:08.586126    4236 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.0502383s)
	I0516 22:59:08.586126    4236 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:59:08.586126    4236 oci.go:655] temporary error: container embed-certs-20220516225628-2444 status is  but expect it to be exited
	I0516 22:59:08.586126    4236 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:59:09.905159    4236 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 22:59:10.956728    4236 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:59:10.956728    4236 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.0515601s)
	I0516 22:59:10.956728    4236 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:59:10.956728    4236 oci.go:655] temporary error: container embed-certs-20220516225628-2444 status is  but expect it to be exited
	I0516 22:59:10.956728    4236 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:59:12.561784    4236 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 22:59:13.674611    4236 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:59:13.674670    4236 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.1126666s)
	I0516 22:59:13.674670    4236 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:59:13.674670    4236 oci.go:655] temporary error: container embed-certs-20220516225628-2444 status is  but expect it to be exited
	I0516 22:59:13.674670    4236 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:59:16.022830    4236 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 22:59:17.136148    4236 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:59:17.136148    4236 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.1133089s)
	I0516 22:59:17.136148    4236 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:59:17.136148    4236 oci.go:655] temporary error: container embed-certs-20220516225628-2444 status is  but expect it to be exited
	I0516 22:59:17.136148    4236 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:59:21.664312    4236 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 22:59:22.703185    4236 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:59:22.703263    4236 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.0386954s)
	I0516 22:59:22.703263    4236 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:59:22.703263    4236 oci.go:655] temporary error: container embed-certs-20220516225628-2444 status is  but expect it to be exited
	I0516 22:59:22.703263    4236 oci.go:88] couldn't shut down embed-certs-20220516225628-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	 
	I0516 22:59:22.712391    4236 cli_runner.go:164] Run: docker rm -f -v embed-certs-20220516225628-2444
	I0516 22:59:23.779947    4236 cli_runner.go:217] Completed: docker rm -f -v embed-certs-20220516225628-2444: (1.0675468s)
	I0516 22:59:23.790932    4236 cli_runner.go:164] Run: docker container inspect -f {{.Id}} embed-certs-20220516225628-2444
	W0516 22:59:24.866392    4236 cli_runner.go:211] docker container inspect -f {{.Id}} embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:59:24.866392    4236 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} embed-certs-20220516225628-2444: (1.075451s)
	I0516 22:59:24.873391    4236 cli_runner.go:164] Run: docker network inspect embed-certs-20220516225628-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:59:25.958753    4236 cli_runner.go:211] docker network inspect embed-certs-20220516225628-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:59:25.958753    4236 cli_runner.go:217] Completed: docker network inspect embed-certs-20220516225628-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0853537s)
	I0516 22:59:25.965383    4236 network_create.go:272] running [docker network inspect embed-certs-20220516225628-2444] to gather additional debugging logs...
	I0516 22:59:25.966378    4236 cli_runner.go:164] Run: docker network inspect embed-certs-20220516225628-2444
	W0516 22:59:27.086214    4236 cli_runner.go:211] docker network inspect embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:59:27.086350    4236 cli_runner.go:217] Completed: docker network inspect embed-certs-20220516225628-2444: (1.1198261s)
	I0516 22:59:27.086413    4236 network_create.go:275] error running [docker network inspect embed-certs-20220516225628-2444]: docker network inspect embed-certs-20220516225628-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20220516225628-2444
	I0516 22:59:27.086540    4236 network_create.go:277] output of [docker network inspect embed-certs-20220516225628-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20220516225628-2444
	
	** /stderr **
	W0516 22:59:27.087477    4236 delete.go:139] delete failed (probably ok) <nil>
	I0516 22:59:27.087477    4236 fix.go:115] Sleeping 1 second for extra luck!
	I0516 22:59:28.095083    4236 start.go:131] createHost starting for "" (driver="docker")
	I0516 22:59:28.099647    4236 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0516 22:59:28.099980    4236 start.go:165] libmachine.API.Create for "embed-certs-20220516225628-2444" (driver="docker")
	I0516 22:59:28.099980    4236 client.go:168] LocalClient.Create starting
	I0516 22:59:28.100646    4236 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 22:59:28.100869    4236 main.go:134] libmachine: Decoding PEM data...
	I0516 22:59:28.100869    4236 main.go:134] libmachine: Parsing certificate...
	I0516 22:59:28.100869    4236 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 22:59:28.100869    4236 main.go:134] libmachine: Decoding PEM data...
	I0516 22:59:28.100869    4236 main.go:134] libmachine: Parsing certificate...
	I0516 22:59:28.111862    4236 cli_runner.go:164] Run: docker network inspect embed-certs-20220516225628-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 22:59:29.242335    4236 cli_runner.go:211] docker network inspect embed-certs-20220516225628-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 22:59:29.242547    4236 cli_runner.go:217] Completed: docker network inspect embed-certs-20220516225628-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1304209s)
	I0516 22:59:29.251651    4236 network_create.go:272] running [docker network inspect embed-certs-20220516225628-2444] to gather additional debugging logs...
	I0516 22:59:29.251651    4236 cli_runner.go:164] Run: docker network inspect embed-certs-20220516225628-2444
	W0516 22:59:30.360301    4236 cli_runner.go:211] docker network inspect embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:59:30.360441    4236 cli_runner.go:217] Completed: docker network inspect embed-certs-20220516225628-2444: (1.1086414s)
	I0516 22:59:30.360595    4236 network_create.go:275] error running [docker network inspect embed-certs-20220516225628-2444]: docker network inspect embed-certs-20220516225628-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20220516225628-2444
	I0516 22:59:30.360623    4236 network_create.go:277] output of [docker network inspect embed-certs-20220516225628-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20220516225628-2444
	
	** /stderr **
	I0516 22:59:30.368775    4236 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 22:59:31.440635    4236 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0718507s)
	I0516 22:59:31.459239    4236 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0000de1e8] misses:0}
	I0516 22:59:31.459239    4236 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:59:31.459239    4236 network_create.go:115] attempt to create docker network embed-certs-20220516225628-2444 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0516 22:59:31.469048    4236 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444
	W0516 22:59:32.558328    4236 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:59:32.558364    4236 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444: (1.0889803s)
	W0516 22:59:32.558416    4236 network_create.go:107] failed to create docker network embed-certs-20220516225628-2444 192.168.49.0/24, will retry: subnet is taken
	I0516 22:59:32.576722    4236 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000de1e8] amended:false}} dirty:map[] misses:0}
	I0516 22:59:32.576722    4236 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:59:32.592805    4236 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000de1e8] amended:true}} dirty:map[192.168.49.0:0xc0000de1e8 192.168.58.0:0xc0006c8ac0] misses:0}
	I0516 22:59:32.592805    4236 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:59:32.592805    4236 network_create.go:115] attempt to create docker network embed-certs-20220516225628-2444 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0516 22:59:32.601756    4236 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444
	W0516 22:59:33.763584    4236 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:59:33.763751    4236 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444: (1.1615599s)
	W0516 22:59:33.763751    4236 network_create.go:107] failed to create docker network embed-certs-20220516225628-2444 192.168.58.0/24, will retry: subnet is taken
	I0516 22:59:33.781513    4236 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000de1e8] amended:true}} dirty:map[192.168.49.0:0xc0000de1e8 192.168.58.0:0xc0006c8ac0] misses:1}
	I0516 22:59:33.781513    4236 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:59:33.798117    4236 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000de1e8] amended:true}} dirty:map[192.168.49.0:0xc0000de1e8 192.168.58.0:0xc0006c8ac0 192.168.67.0:0xc0000de488] misses:1}
	I0516 22:59:33.798217    4236 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:59:33.798217    4236 network_create.go:115] attempt to create docker network embed-certs-20220516225628-2444 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0516 22:59:33.806087    4236 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444
	W0516 22:59:34.880236    4236 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:59:34.880236    4236 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444: (1.0741394s)
	W0516 22:59:34.880236    4236 network_create.go:107] failed to create docker network embed-certs-20220516225628-2444 192.168.67.0/24, will retry: subnet is taken
	I0516 22:59:34.897907    4236 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000de1e8] amended:true}} dirty:map[192.168.49.0:0xc0000de1e8 192.168.58.0:0xc0006c8ac0 192.168.67.0:0xc0000de488] misses:2}
	I0516 22:59:34.898380    4236 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:59:34.913552    4236 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000de1e8] amended:true}} dirty:map[192.168.49.0:0xc0000de1e8 192.168.58.0:0xc0006c8ac0 192.168.67.0:0xc0000de488 192.168.76.0:0xc0006c8c18] misses:2}
	I0516 22:59:34.913552    4236 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 22:59:34.913552    4236 network_create.go:115] attempt to create docker network embed-certs-20220516225628-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0516 22:59:34.924157    4236 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444
	W0516 22:59:36.050081    4236 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:59:36.050412    4236 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444: (1.125893s)
	E0516 22:59:36.050481    4236 network_create.go:104] error while trying to create docker network embed-certs-20220516225628-2444 192.168.76.0/24: create docker network embed-certs-20220516225628-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d77daed2e80d96a0f29f1186daf7e328bea1bfa201346fa8dbe37421138a886a (br-d77daed2e80d): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	W0516 22:59:36.050861    4236 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network embed-certs-20220516225628-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d77daed2e80d96a0f29f1186daf7e328bea1bfa201346fa8dbe37421138a886a (br-d77daed2e80d): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	I0516 22:59:36.067589    4236 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 22:59:37.143014    4236 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0753441s)
	I0516 22:59:37.151674    4236 cli_runner.go:164] Run: docker volume create embed-certs-20220516225628-2444 --label name.minikube.sigs.k8s.io=embed-certs-20220516225628-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 22:59:38.226618    4236 cli_runner.go:211] docker volume create embed-certs-20220516225628-2444 --label name.minikube.sigs.k8s.io=embed-certs-20220516225628-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 22:59:38.226664    4236 cli_runner.go:217] Completed: docker volume create embed-certs-20220516225628-2444 --label name.minikube.sigs.k8s.io=embed-certs-20220516225628-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0748089s)
	I0516 22:59:38.226739    4236 client.go:171] LocalClient.Create took 10.1266263s
	I0516 22:59:40.238981    4236 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:59:40.246764    4236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444
	W0516 22:59:41.381862    4236 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:59:41.381862    4236 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: (1.1349507s)
	I0516 22:59:41.381862    4236 retry.go:31] will retry after 164.129813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:59:41.565148    4236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444
	W0516 22:59:42.643763    4236 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:59:42.643849    4236 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: (1.0784352s)
	W0516 22:59:42.643893    4236 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	
	W0516 22:59:42.643893    4236 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:59:42.654154    4236 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:59:42.661870    4236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444
	W0516 22:59:43.738825    4236 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:59:43.738967    4236 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: (1.0767752s)
	I0516 22:59:43.738967    4236 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:59:43.954054    4236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444
	W0516 22:59:45.030420    4236 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:59:45.030420    4236 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: (1.0763565s)
	W0516 22:59:45.030420    4236 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	
	W0516 22:59:45.030420    4236 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:59:45.030420    4236 start.go:134] duration metric: createHost completed in 16.9351932s
	I0516 22:59:45.041420    4236 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 22:59:45.048417    4236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444
	W0516 22:59:46.178164    4236 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:59:46.178164    4236 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: (1.1297374s)
	I0516 22:59:46.178164    4236 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:59:46.529744    4236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444
	W0516 22:59:47.616642    4236 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:59:47.616642    4236 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: (1.0868888s)
	W0516 22:59:47.616642    4236 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	
	W0516 22:59:47.616642    4236 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:59:47.626646    4236 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 22:59:47.633639    4236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444
	W0516 22:59:48.707235    4236 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:59:48.707235    4236 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: (1.0735861s)
	I0516 22:59:48.707235    4236 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:59:48.937428    4236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444
	W0516 22:59:50.047029    4236 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:59:50.047029    4236 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: (1.1095912s)
	W0516 22:59:50.047029    4236 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	
	W0516 22:59:50.047029    4236 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:59:50.047029    4236 fix.go:57] fixHost completed within 53.8504789s
	I0516 22:59:50.047029    4236 start.go:81] releasing machines lock for "embed-certs-20220516225628-2444", held for 53.8504789s
	W0516 22:59:50.047029    4236 start.go:608] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220516225628-2444 container: docker volume create embed-certs-20220516225628-2444 --label name.minikube.sigs.k8s.io=embed-certs-20220516225628-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220516225628-2444: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220516225628-2444': mkdir /var/lib/docker/volumes/embed-certs-20220516225628-2444: read-only file system
	W0516 22:59:50.048028    4236 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220516225628-2444 container: docker volume create embed-certs-20220516225628-2444 --label name.minikube.sigs.k8s.io=embed-certs-20220516225628-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220516225628-2444: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220516225628-2444': mkdir /var/lib/docker/volumes/embed-certs-20220516225628-2444: read-only file system
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220516225628-2444 container: docker volume create embed-certs-20220516225628-2444 --label name.minikube.sigs.k8s.io=embed-certs-20220516225628-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220516225628-2444: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220516225628-2444': mkdir /var/lib/docker/volumes/embed-certs-20220516225628-2444: read-only file system
	
	I0516 22:59:50.048028    4236 start.go:623] Will try again in 5 seconds ...
	I0516 22:59:55.056765    4236 start.go:352] acquiring machines lock for embed-certs-20220516225628-2444: {Name:mk313f3adfa614f48756e4c4bd1949083e33b93c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 22:59:55.056861    4236 start.go:356] acquired machines lock for "embed-certs-20220516225628-2444" in 0s
	I0516 22:59:55.056861    4236 start.go:94] Skipping create...Using existing machine configuration
	I0516 22:59:55.056861    4236 fix.go:55] fixHost starting: 
	I0516 22:59:55.076541    4236 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 22:59:56.151860    4236 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:59:56.151860    4236 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.0752395s)
	I0516 22:59:56.151860    4236 fix.go:103] recreateIfNeeded on embed-certs-20220516225628-2444: state= err=unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:59:56.151860    4236 fix.go:108] machineExists: false. err=machine does not exist
	I0516 22:59:56.155900    4236 out.go:177] * docker "embed-certs-20220516225628-2444" container is missing, will recreate.
	I0516 22:59:56.158759    4236 delete.go:124] DEMOLISHING embed-certs-20220516225628-2444 ...
	I0516 22:59:56.176319    4236 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 22:59:57.272240    4236 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:59:57.272240    4236 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.0957754s)
	W0516 22:59:57.272240    4236 stop.go:75] unable to get state: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:59:57.272240    4236 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:59:57.292466    4236 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 22:59:58.380636    4236 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 22:59:58.380636    4236 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.0880389s)
	I0516 22:59:58.380636    4236 delete.go:82] Unable to get host status for embed-certs-20220516225628-2444, assuming it has already been deleted: state: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 22:59:58.389367    4236 cli_runner.go:164] Run: docker container inspect -f {{.Id}} embed-certs-20220516225628-2444
	W0516 22:59:59.530887    4236 cli_runner.go:211] docker container inspect -f {{.Id}} embed-certs-20220516225628-2444 returned with exit code 1
	I0516 22:59:59.530887    4236 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} embed-certs-20220516225628-2444: (1.1415112s)
	I0516 22:59:59.530887    4236 kic.go:356] could not find the container embed-certs-20220516225628-2444 to remove it. will try anyways
	I0516 22:59:59.543055    4236 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 23:00:00.645546    4236 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:00:00.645546    4236 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.102319s)
	W0516 23:00:00.645859    4236 oci.go:84] error getting container status, will try to delete anyways: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 23:00:00.656373    4236 cli_runner.go:164] Run: docker exec --privileged -t embed-certs-20220516225628-2444 /bin/bash -c "sudo init 0"
	W0516 23:00:01.774089    4236 cli_runner.go:211] docker exec --privileged -t embed-certs-20220516225628-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 23:00:01.774089    4236 cli_runner.go:217] Completed: docker exec --privileged -t embed-certs-20220516225628-2444 /bin/bash -c "sudo init 0": (1.1177065s)
	I0516 23:00:01.774089    4236 oci.go:641] error shutdown embed-certs-20220516225628-2444: docker exec --privileged -t embed-certs-20220516225628-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 23:00:02.789993    4236 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 23:00:03.907318    4236 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:00:03.907318    4236 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.1173158s)
	I0516 23:00:03.907318    4236 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 23:00:03.907318    4236 oci.go:655] temporary error: container embed-certs-20220516225628-2444 status is  but expect it to be exited
	I0516 23:00:03.907318    4236 retry.go:31] will retry after 484.444922ms: couldn't verify container is exited. %v: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 23:00:04.401303    4236 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 23:00:05.496917    4236 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:00:05.496917    4236 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.0956052s)
	I0516 23:00:05.496917    4236 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 23:00:05.496917    4236 oci.go:655] temporary error: container embed-certs-20220516225628-2444 status is  but expect it to be exited
	I0516 23:00:05.496917    4236 retry.go:31] will retry after 587.275613ms: couldn't verify container is exited. %v: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 23:00:06.102386    4236 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 23:00:07.218247    4236 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:00:07.218247    4236 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.1158519s)
	I0516 23:00:07.218247    4236 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 23:00:07.218247    4236 oci.go:655] temporary error: container embed-certs-20220516225628-2444 status is  but expect it to be exited
	I0516 23:00:07.218247    4236 retry.go:31] will retry after 892.239589ms: couldn't verify container is exited. %v: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 23:00:08.130811    4236 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 23:00:09.201010    4236 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:00:09.201113    4236 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.0700682s)
	I0516 23:00:09.201113    4236 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 23:00:09.201113    4236 oci.go:655] temporary error: container embed-certs-20220516225628-2444 status is  but expect it to be exited
	I0516 23:00:09.201113    4236 retry.go:31] will retry after 1.989705391s: couldn't verify container is exited. %v: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 23:00:11.208391    4236 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 23:00:12.291466    4236 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:00:12.291466    4236 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.0830663s)
	I0516 23:00:12.291466    4236 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 23:00:12.291466    4236 oci.go:655] temporary error: container embed-certs-20220516225628-2444 status is  but expect it to be exited
	I0516 23:00:12.291466    4236 retry.go:31] will retry after 1.818837414s: couldn't verify container is exited. %v: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 23:00:14.131076    4236 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 23:00:15.237445    4236 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:00:15.237445    4236 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.1063596s)
	I0516 23:00:15.237445    4236 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 23:00:15.237445    4236 oci.go:655] temporary error: container embed-certs-20220516225628-2444 status is  but expect it to be exited
	I0516 23:00:15.237445    4236 retry.go:31] will retry after 2.669912672s: couldn't verify container is exited. %v: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 23:00:17.917039    4236 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 23:00:19.049173    4236 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:00:19.049173    4236 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (1.1319739s)
	I0516 23:00:19.049173    4236 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 23:00:19.049173    4236 oci.go:655] temporary error: container embed-certs-20220516225628-2444 status is  but expect it to be exited
	I0516 23:00:19.049173    4236 oci.go:88] couldn't shut down embed-certs-20220516225628-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	 
	I0516 23:00:19.058162    4236 cli_runner.go:164] Run: docker rm -f -v embed-certs-20220516225628-2444
	I0516 23:00:20.159003    4236 cli_runner.go:217] Completed: docker rm -f -v embed-certs-20220516225628-2444: (1.100832s)
	I0516 23:00:20.168498    4236 cli_runner.go:164] Run: docker container inspect -f {{.Id}} embed-certs-20220516225628-2444
	W0516 23:00:21.262807    4236 cli_runner.go:211] docker container inspect -f {{.Id}} embed-certs-20220516225628-2444 returned with exit code 1
	I0516 23:00:21.262807    4236 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} embed-certs-20220516225628-2444: (1.0942998s)
	I0516 23:00:21.273116    4236 cli_runner.go:164] Run: docker network inspect embed-certs-20220516225628-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:00:22.360897    4236 cli_runner.go:211] docker network inspect embed-certs-20220516225628-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:00:22.360897    4236 cli_runner.go:217] Completed: docker network inspect embed-certs-20220516225628-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0877717s)
	I0516 23:00:22.368653    4236 network_create.go:272] running [docker network inspect embed-certs-20220516225628-2444] to gather additional debugging logs...
	I0516 23:00:22.368653    4236 cli_runner.go:164] Run: docker network inspect embed-certs-20220516225628-2444
	W0516 23:00:23.450249    4236 cli_runner.go:211] docker network inspect embed-certs-20220516225628-2444 returned with exit code 1
	I0516 23:00:23.450458    4236 cli_runner.go:217] Completed: docker network inspect embed-certs-20220516225628-2444: (1.0815868s)
	I0516 23:00:23.450458    4236 network_create.go:275] error running [docker network inspect embed-certs-20220516225628-2444]: docker network inspect embed-certs-20220516225628-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20220516225628-2444
	I0516 23:00:23.450458    4236 network_create.go:277] output of [docker network inspect embed-certs-20220516225628-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20220516225628-2444
	
	** /stderr **
	W0516 23:00:23.451493    4236 delete.go:139] delete failed (probably ok) <nil>
	I0516 23:00:23.451493    4236 fix.go:115] Sleeping 1 second for extra luck!
	I0516 23:00:24.463263    4236 start.go:131] createHost starting for "" (driver="docker")
	I0516 23:00:24.468029    4236 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0516 23:00:24.468067    4236 start.go:165] libmachine.API.Create for "embed-certs-20220516225628-2444" (driver="docker")
	I0516 23:00:24.468067    4236 client.go:168] LocalClient.Create starting
	I0516 23:00:24.468667    4236 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 23:00:24.468667    4236 main.go:134] libmachine: Decoding PEM data...
	I0516 23:00:24.468667    4236 main.go:134] libmachine: Parsing certificate...
	I0516 23:00:24.469289    4236 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 23:00:24.469289    4236 main.go:134] libmachine: Decoding PEM data...
	I0516 23:00:24.469289    4236 main.go:134] libmachine: Parsing certificate...
	I0516 23:00:24.483462    4236 cli_runner.go:164] Run: docker network inspect embed-certs-20220516225628-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:00:25.592872    4236 cli_runner.go:211] docker network inspect embed-certs-20220516225628-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:00:25.592872    4236 cli_runner.go:217] Completed: docker network inspect embed-certs-20220516225628-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1094007s)
	I0516 23:00:25.599872    4236 network_create.go:272] running [docker network inspect embed-certs-20220516225628-2444] to gather additional debugging logs...
	I0516 23:00:25.599872    4236 cli_runner.go:164] Run: docker network inspect embed-certs-20220516225628-2444
	W0516 23:00:26.697294    4236 cli_runner.go:211] docker network inspect embed-certs-20220516225628-2444 returned with exit code 1
	I0516 23:00:26.697294    4236 cli_runner.go:217] Completed: docker network inspect embed-certs-20220516225628-2444: (1.0974128s)
	I0516 23:00:26.697294    4236 network_create.go:275] error running [docker network inspect embed-certs-20220516225628-2444]: docker network inspect embed-certs-20220516225628-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20220516225628-2444
	I0516 23:00:26.697294    4236 network_create.go:277] output of [docker network inspect embed-certs-20220516225628-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20220516225628-2444
	
	** /stderr **
	I0516 23:00:26.704298    4236 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 23:00:27.801245    4236 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0969373s)
	I0516 23:00:27.818255    4236 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000de1e8] amended:true}} dirty:map[192.168.49.0:0xc0000de1e8 192.168.58.0:0xc0006c8ac0 192.168.67.0:0xc0000de488 192.168.76.0:0xc0006c8c18] misses:2}
	I0516 23:00:27.818255    4236 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:00:27.834336    4236 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000de1e8] amended:true}} dirty:map[192.168.49.0:0xc0000de1e8 192.168.58.0:0xc0006c8ac0 192.168.67.0:0xc0000de488 192.168.76.0:0xc0006c8c18] misses:3}
	I0516 23:00:27.834336    4236 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:00:27.849307    4236 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000de1e8 192.168.58.0:0xc0006c8ac0 192.168.67.0:0xc0000de488 192.168.76.0:0xc0006c8c18] amended:false}} dirty:map[] misses:0}
	I0516 23:00:27.849307    4236 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:00:27.864289    4236 network.go:279] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000de1e8 192.168.58.0:0xc0006c8ac0 192.168.67.0:0xc0000de488 192.168.76.0:0xc0006c8c18] amended:false}} dirty:map[] misses:0}
	I0516 23:00:27.864289    4236 network.go:238] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:00:27.880252    4236 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000de1e8 192.168.58.0:0xc0006c8ac0 192.168.67.0:0xc0000de488 192.168.76.0:0xc0006c8c18] amended:true}} dirty:map[192.168.49.0:0xc0000de1e8 192.168.58.0:0xc0006c8ac0 192.168.67.0:0xc0000de488 192.168.76.0:0xc0006c8c18 192.168.85.0:0xc0006c8808] misses:0}
	I0516 23:00:27.880252    4236 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:00:27.880252    4236 network_create.go:115] attempt to create docker network embed-certs-20220516225628-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0516 23:00:27.888247    4236 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444
	W0516 23:00:28.995954    4236 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444 returned with exit code 1
	I0516 23:00:28.995954    4236 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444: (1.1073916s)
	E0516 23:00:28.995954    4236 network_create.go:104] error while trying to create docker network embed-certs-20220516225628-2444 192.168.85.0/24: create docker network embed-certs-20220516225628-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 83b8564568b6395c7b2d36a6d416313fe6d141607222f38600f846b2fd61222f (br-83b8564568b6): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	W0516 23:00:28.995954    4236 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network embed-certs-20220516225628-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 83b8564568b6395c7b2d36a6d416313fe6d141607222f38600f846b2fd61222f (br-83b8564568b6): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network embed-certs-20220516225628-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 83b8564568b6395c7b2d36a6d416313fe6d141607222f38600f846b2fd61222f (br-83b8564568b6): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	I0516 23:00:29.012096    4236 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 23:00:30.104863    4236 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0926634s)
	I0516 23:00:30.114430    4236 cli_runner.go:164] Run: docker volume create embed-certs-20220516225628-2444 --label name.minikube.sigs.k8s.io=embed-certs-20220516225628-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 23:00:31.225695    4236 cli_runner.go:211] docker volume create embed-certs-20220516225628-2444 --label name.minikube.sigs.k8s.io=embed-certs-20220516225628-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 23:00:31.225695    4236 cli_runner.go:217] Completed: docker volume create embed-certs-20220516225628-2444 --label name.minikube.sigs.k8s.io=embed-certs-20220516225628-2444 --label created_by.minikube.sigs.k8s.io=true: (1.1106796s)
	I0516 23:00:31.225695    4236 client.go:171] LocalClient.Create took 6.7575707s
	I0516 23:00:33.248147    4236 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:00:33.254718    4236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444
	W0516 23:00:34.346271    4236 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444 returned with exit code 1
	I0516 23:00:34.346271    4236 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: (1.0914422s)
	I0516 23:00:34.346271    4236 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 23:00:34.623607    4236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444
	W0516 23:00:35.744281    4236 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444 returned with exit code 1
	I0516 23:00:35.744281    4236 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: (1.1206649s)
	W0516 23:00:35.744281    4236 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	
	W0516 23:00:35.744281    4236 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 23:00:35.755282    4236 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:00:35.763282    4236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444
	W0516 23:00:36.844954    4236 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444 returned with exit code 1
	I0516 23:00:36.844954    4236 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: (1.0816628s)
	I0516 23:00:36.844954    4236 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 23:00:37.060889    4236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444
	W0516 23:00:38.130066    4236 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444 returned with exit code 1
	I0516 23:00:38.130066    4236 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: (1.0691687s)
	W0516 23:00:38.130066    4236 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	
	W0516 23:00:38.130066    4236 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 23:00:38.130066    4236 start.go:134] duration metric: createHost completed in 13.6665522s
	I0516 23:00:38.139074    4236 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:00:38.147101    4236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444
	W0516 23:00:39.247841    4236 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444 returned with exit code 1
	I0516 23:00:39.247929    4236 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: (1.100602s)
	I0516 23:00:39.247929    4236 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 23:00:39.573101    4236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444
	W0516 23:00:40.669345    4236 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444 returned with exit code 1
	I0516 23:00:40.669415    4236 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: (1.0961832s)
	W0516 23:00:40.669590    4236 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	
	W0516 23:00:40.669624    4236 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 23:00:40.681452    4236 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:00:40.689503    4236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444
	W0516 23:00:41.809444    4236 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444 returned with exit code 1
	I0516 23:00:41.809444    4236 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: (1.1199319s)
	I0516 23:00:41.809444    4236 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 23:00:42.165758    4236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444
	W0516 23:00:43.297935    4236 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444 returned with exit code 1
	I0516 23:00:43.297935    4236 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: (1.1321673s)
	W0516 23:00:43.297935    4236 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	
	W0516 23:00:43.297935    4236 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220516225628-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220516225628-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	I0516 23:00:43.297935    4236 fix.go:57] fixHost completed within 48.2406635s
	I0516 23:00:43.297935    4236 start.go:81] releasing machines lock for "embed-certs-20220516225628-2444", held for 48.2406635s
	W0516 23:00:43.297935    4236 out.go:239] * Failed to start docker container. Running "minikube delete -p embed-certs-20220516225628-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220516225628-2444 container: docker volume create embed-certs-20220516225628-2444 --label name.minikube.sigs.k8s.io=embed-certs-20220516225628-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220516225628-2444: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220516225628-2444': mkdir /var/lib/docker/volumes/embed-certs-20220516225628-2444: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p embed-certs-20220516225628-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220516225628-2444 container: docker volume create embed-certs-20220516225628-2444 --label name.minikube.sigs.k8s.io=embed-certs-20220516225628-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220516225628-2444: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220516225628-2444': mkdir /var/lib/docker/volumes/embed-certs-20220516225628-2444: read-only file system
	
	I0516 23:00:43.302935    4236 out.go:177] 
	W0516 23:00:43.304938    4236 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220516225628-2444 container: docker volume create embed-certs-20220516225628-2444 --label name.minikube.sigs.k8s.io=embed-certs-20220516225628-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220516225628-2444: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220516225628-2444': mkdir /var/lib/docker/volumes/embed-certs-20220516225628-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220516225628-2444 container: docker volume create embed-certs-20220516225628-2444 --label name.minikube.sigs.k8s.io=embed-certs-20220516225628-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220516225628-2444: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220516225628-2444': mkdir /var/lib/docker/volumes/embed-certs-20220516225628-2444: read-only file system
	
	W0516 23:00:43.304938    4236 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0516 23:00:43.304938    4236 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0516 23:00:43.307932    4236 out.go:177] 

** /stderr **
start_stop_delete_test.go:261: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p embed-certs-20220516225628-2444 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.6": exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220516225628-2444

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220516225628-2444: exit status 1 (1.176595s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220516225628-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220516225628-2444 -n embed-certs-20220516225628-2444

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220516225628-2444 -n embed-certs-20220516225628-2444: exit status 7 (2.9715961s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 23:00:47.654412    4848 status.go:247] status error: host: state: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220516225628-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (122.15s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (4.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:277: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-20220516225533-2444" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220516225533-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220516225533-2444: exit status 1 (1.1673426s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220516225533-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220516225533-2444 -n old-k8s-version-20220516225533-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220516225533-2444 -n old-k8s-version-20220516225533-2444: exit status 7 (2.9816679s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 22:59:58.146672    4388 status.go:247] status error: host: state: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220516225533-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (4.16s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (4.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:290: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-20220516225533-2444" does not exist
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context old-k8s-version-20220516225533-2444 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:293: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220516225533-2444 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (272.1792ms)

** stderr ** 
	error: context "old-k8s-version-20220516225533-2444" does not exist

** /stderr **
start_stop_delete_test.go:295: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-20220516225533-2444 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:299: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220516225533-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220516225533-2444: exit status 1 (1.1866017s)

                                                
                                                
-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220516225533-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220516225533-2444 -n old-k8s-version-20220516225533-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220516225533-2444 -n old-k8s-version-20220516225533-2444: exit status 7 (2.9915995s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 23:00:02.614211    8056 status.go:247] status error: host: state: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220516225533-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (4.47s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (7.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p old-k8s-version-20220516225533-2444 "sudo crictl images -o json"
start_stop_delete_test.go:306: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p old-k8s-version-20220516225533-2444 "sudo crictl images -o json": exit status 80 (3.2121776s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_2ebd0b017f5d88727e5083393ee181280e239d1d_4.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:306: failed to get images inside minikube. args "out/minikube-windows-amd64.exe ssh -p old-k8s-version-20220516225533-2444 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:306: failed to decode images json unexpected end of JSON input. output:


start_stop_delete_test.go:306: v1.16.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220516225533-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220516225533-2444: exit status 1 (1.1312532s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220516225533-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220516225533-2444 -n old-k8s-version-20220516225533-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220516225533-2444 -n old-k8s-version-20220516225533-2444: exit status 7 (2.9883885s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 23:00:09.957926    1756 status.go:247] status error: host: state: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220516225533-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (7.34s)

TestStartStop/group/old-k8s-version/serial/Pause (11.6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-20220516225533-2444 --alsologtostderr -v=1

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p old-k8s-version-20220516225533-2444 --alsologtostderr -v=1: exit status 80 (3.2381468s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0516 23:00:10.219360    8152 out.go:296] Setting OutFile to fd 1668 ...
	I0516 23:00:10.284986    8152 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:00:10.284986    8152 out.go:309] Setting ErrFile to fd 1636...
	I0516 23:00:10.284986    8152 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:00:10.296421    8152 out.go:303] Setting JSON to false
	I0516 23:00:10.296421    8152 mustload.go:65] Loading cluster: old-k8s-version-20220516225533-2444
	I0516 23:00:10.296421    8152 config.go:178] Loaded profile config "old-k8s-version-20220516225533-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0516 23:00:10.315543    8152 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}
	W0516 23:00:12.923189    8152 cli_runner.go:211] docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:00:12.923189    8152 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: (2.6075127s)
	I0516 23:00:12.929508    8152 out.go:177] 
	W0516 23:00:12.932315    8152 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	
	X Exiting due to GUEST_STATUS: state: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444
	
	W0516 23:00:12.932369    8152 out.go:239] * 
	* 
	W0516 23:00:13.165206    8152 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_10.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_10.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0516 23:00:13.168461    8152 out.go:177] 

** /stderr **
start_stop_delete_test.go:313: out/minikube-windows-amd64.exe pause -p old-k8s-version-20220516225533-2444 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220516225533-2444

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220516225533-2444: exit status 1 (1.1770459s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220516225533-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220516225533-2444 -n old-k8s-version-20220516225533-2444

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220516225533-2444 -n old-k8s-version-20220516225533-2444: exit status 7 (2.9571558s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 23:00:17.360574    2720 status.go:247] status error: host: state: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220516225533-2444" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220516225533-2444

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220516225533-2444: exit status 1 (1.1521954s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220516225533-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220516225533-2444 -n old-k8s-version-20220516225533-2444

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220516225533-2444 -n old-k8s-version-20220516225533-2444: exit status 7 (3.0512352s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 23:00:21.574322    7716 status.go:247] status error: host: state: unknown state "old-k8s-version-20220516225533-2444": docker container inspect old-k8s-version-20220516225533-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220516225533-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220516225533-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (11.60s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (4.26s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:277: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-20220516225557-2444" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220516225557-2444

=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220516225557-2444: exit status 1 (1.1783686s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220516225557-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220516225557-2444 -n no-preload-20220516225557-2444

=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220516225557-2444 -n no-preload-20220516225557-2444: exit status 7 (3.0683981s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 23:00:20.807128    3876 status.go:247] status error: host: state: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220516225557-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (4.26s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (4.41s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:290: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-20220516225557-2444" does not exist
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context no-preload-20220516225557-2444 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:293: (dbg) Non-zero exit: kubectl --context no-preload-20220516225557-2444 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (247.321ms)

** stderr ** 
	error: context "no-preload-20220516225557-2444" does not exist

** /stderr **
start_stop_delete_test.go:295: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-20220516225557-2444 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:299: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220516225557-2444

=== CONT  TestStartStop/group/no-preload/serial/AddonExistsAfterStop
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220516225557-2444: exit status 1 (1.1927301s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220516225557-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220516225557-2444 -n no-preload-20220516225557-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220516225557-2444 -n no-preload-20220516225557-2444: exit status 7 (2.9483654s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 23:00:25.214698    3912 status.go:247] status error: host: state: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220516225557-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (4.41s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (7.44s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p no-preload-20220516225557-2444 "sudo crictl images -o json"
start_stop_delete_test.go:306: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p no-preload-20220516225557-2444 "sudo crictl images -o json": exit status 80 (3.2598853s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_2ebd0b017f5d88727e5083393ee181280e239d1d_4.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:306: failed to get images inside minikube. args "out/minikube-windows-amd64.exe ssh -p no-preload-20220516225557-2444 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:306: failed to decode images json unexpected end of JSON input. output:


start_stop_delete_test.go:306: v1.23.6 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns/coredns:v1.8.6",
- 	"k8s.gcr.io/etcd:3.5.1-0",
- 	"k8s.gcr.io/kube-apiserver:v1.23.6",
- 	"k8s.gcr.io/kube-controller-manager:v1.23.6",
- 	"k8s.gcr.io/kube-proxy:v1.23.6",
- 	"k8s.gcr.io/kube-scheduler:v1.23.6",
- 	"k8s.gcr.io/pause:3.6",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220516225557-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220516225557-2444: exit status 1 (1.1780814s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220516225557-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220516225557-2444 -n no-preload-20220516225557-2444

=== CONT  TestStartStop/group/no-preload/serial/VerifyKubernetesImages
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220516225557-2444 -n no-preload-20220516225557-2444: exit status 7 (2.9907212s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 23:00:32.654993    8204 status.go:247] status error: host: state: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220516225557-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (7.44s)

TestStartStop/group/no-preload/serial/Pause (11.74s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe pause -p no-preload-20220516225557-2444 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p no-preload-20220516225557-2444 --alsologtostderr -v=1: exit status 80 (3.2473106s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0516 23:00:32.915708    1676 out.go:296] Setting OutFile to fd 1472 ...
	I0516 23:00:32.976762    1676 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:00:32.976762    1676 out.go:309] Setting ErrFile to fd 1680...
	I0516 23:00:32.976762    1676 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:00:32.986294    1676 out.go:303] Setting JSON to false
	I0516 23:00:32.986294    1676 mustload.go:65] Loading cluster: no-preload-20220516225557-2444
	I0516 23:00:32.987389    1676 config.go:178] Loaded profile config "no-preload-20220516225557-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:00:33.005907    1676 cli_runner.go:164] Run: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}
	W0516 23:00:35.605588    1676 cli_runner.go:211] docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:00:35.605657    1676 cli_runner.go:217] Completed: docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: (2.5995592s)
	I0516 23:00:35.616397    1676 out.go:177] 
	W0516 23:00:35.619532    1676 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	
	X Exiting due to GUEST_STATUS: state: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444
	
	W0516 23:00:35.619532    1676 out.go:239] * 
	* 
	W0516 23:00:35.863291    1676 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_10.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_10.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0516 23:00:35.866414    1676 out.go:177] 

** /stderr **
start_stop_delete_test.go:313: out/minikube-windows-amd64.exe pause -p no-preload-20220516225557-2444 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220516225557-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220516225557-2444: exit status 1 (1.1995684s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220516225557-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220516225557-2444 -n no-preload-20220516225557-2444

=== CONT  TestStartStop/group/no-preload/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220516225557-2444 -n no-preload-20220516225557-2444: exit status 7 (3.0060992s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 23:00:40.118103    8064 status.go:247] status error: host: state: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220516225557-2444" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220516225557-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220516225557-2444: exit status 1 (1.1674833s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220516225557-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220516225557-2444 -n no-preload-20220516225557-2444

=== CONT  TestStartStop/group/no-preload/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220516225557-2444 -n no-preload-20220516225557-2444: exit status 7 (3.1021985s)

-- stdout --
	Nonexistent

                                                
** stderr ** 
	E0516 23:00:44.397960    8384 status.go:247] status error: host: state: unknown state "no-preload-20220516225557-2444": docker container inspect no-preload-20220516225557-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220516225557-2444

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220516225557-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (11.74s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (86.09s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220516230045-2444 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.6

=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220516230045-2444 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.6: exit status 60 (1m21.7390206s)

-- stdout --
	* [default-k8s-different-port-20220516230045-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node default-k8s-different-port-20220516230045-2444 in cluster default-k8s-different-port-20220516230045-2444
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "default-k8s-different-port-20220516230045-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0516 23:00:45.722060    6068 out.go:296] Setting OutFile to fd 1524 ...
	I0516 23:00:45.782894    6068 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:00:45.782923    6068 out.go:309] Setting ErrFile to fd 1632...
	I0516 23:00:45.782969    6068 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:00:45.794734    6068 out.go:303] Setting JSON to false
	I0516 23:00:45.796902    6068 start.go:115] hostinfo: {"hostname":"minikube2","uptime":5158,"bootTime":1652736887,"procs":154,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0516 23:00:45.797902    6068 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0516 23:00:45.801186    6068 out.go:177] * [default-k8s-different-port-20220516230045-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0516 23:00:45.804424    6068 notify.go:193] Checking for updates...
	I0516 23:00:45.806913    6068 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0516 23:00:45.809723    6068 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0516 23:00:45.813027    6068 out.go:177]   - MINIKUBE_LOCATION=12739
	I0516 23:00:45.815501    6068 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0516 23:00:45.818676    6068 config.go:178] Loaded profile config "cert-expiration-20220516225440-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:00:45.818676    6068 config.go:178] Loaded profile config "embed-certs-20220516225628-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:00:45.818676    6068 config.go:178] Loaded profile config "multinode-20220516223121-2444-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:00:45.818676    6068 config.go:178] Loaded profile config "no-preload-20220516225557-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:00:45.818676    6068 driver.go:358] Setting default libvirt URI to qemu:///system
	I0516 23:00:48.505899    6068 docker.go:137] docker version: linux-20.10.14
	I0516 23:00:48.516370    6068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 23:00:50.659737    6068 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1432634s)
	I0516 23:00:50.659737    6068 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 23:00:49.6026311 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0516 23:00:50.663738    6068 out.go:177] * Using the docker driver based on user configuration
	I0516 23:00:50.669738    6068 start.go:284] selected driver: docker
	I0516 23:00:50.669738    6068 start.go:806] validating driver "docker" against <nil>
	I0516 23:00:50.669738    6068 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0516 23:00:50.801830    6068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 23:00:52.966932    6068 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1650832s)
	I0516 23:00:52.966932    6068 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 23:00:51.8698592 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0516 23:00:52.966932    6068 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0516 23:00:52.967968    6068 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0516 23:00:52.970935    6068 out.go:177] * Using Docker Desktop driver with the root privilege
	I0516 23:00:52.973936    6068 cni.go:95] Creating CNI manager for ""
	I0516 23:00:52.973936    6068 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0516 23:00:52.973936    6068 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220516230045-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220516230045-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 23:00:52.977979    6068 out.go:177] * Starting control plane node default-k8s-different-port-20220516230045-2444 in cluster default-k8s-different-port-20220516230045-2444
	I0516 23:00:52.979945    6068 cache.go:120] Beginning downloading kic base image for docker with docker
	I0516 23:00:52.982925    6068 out.go:177] * Pulling base image ...
	I0516 23:00:52.985967    6068 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0516 23:00:52.985967    6068 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0516 23:00:52.986971    6068 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0516 23:00:52.986971    6068 cache.go:57] Caching tarball of preloaded images
	I0516 23:00:52.986971    6068 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0516 23:00:52.986971    6068 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0516 23:00:52.987926    6068 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-different-port-20220516230045-2444\config.json ...
	I0516 23:00:52.987926    6068 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-different-port-20220516230045-2444\config.json: {Name:mk33b79d47bfcbaf90cdfa10523a5c9bb9b9bf73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0516 23:00:54.112105    6068 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0516 23:00:54.112247    6068 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:00:54.112484    6068 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:00:54.112484    6068 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
	I0516 23:00:54.112484    6068 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory, skipping pull
	I0516 23:00:54.112484    6068 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in cache, skipping pull
	I0516 23:00:54.112484    6068 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
	I0516 23:00:54.112484    6068 cache.go:160] Loading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from local cache
	I0516 23:00:54.112484    6068 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:00:56.478042    6068 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from cached tarball
	I0516 23:00:56.478042    6068 cache.go:206] Successfully downloaded all kic artifacts
	I0516 23:00:56.478042    6068 start.go:352] acquiring machines lock for default-k8s-different-port-20220516230045-2444: {Name:mkca2c0574e16790f4d61bb6412ca78505ef9070 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 23:00:56.478042    6068 start.go:356] acquired machines lock for "default-k8s-different-port-20220516230045-2444" in 0s
	I0516 23:00:56.478042    6068 start.go:91] Provisioning new machine with config: &{Name:default-k8s-different-port-20220516230045-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220516230045-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0516 23:00:56.478042    6068 start.go:131] createHost starting for "" (driver="docker")
	I0516 23:00:56.482074    6068 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0516 23:00:56.482074    6068 start.go:165] libmachine.API.Create for "default-k8s-different-port-20220516230045-2444" (driver="docker")
	I0516 23:00:56.483050    6068 client.go:168] LocalClient.Create starting
	I0516 23:00:56.483050    6068 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 23:00:56.483050    6068 main.go:134] libmachine: Decoding PEM data...
	I0516 23:00:56.483050    6068 main.go:134] libmachine: Parsing certificate...
	I0516 23:00:56.483050    6068 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 23:00:56.483050    6068 main.go:134] libmachine: Decoding PEM data...
	I0516 23:00:56.484055    6068 main.go:134] libmachine: Parsing certificate...
	I0516 23:00:56.492052    6068 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220516230045-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:00:57.654738    6068 cli_runner.go:211] docker network inspect default-k8s-different-port-20220516230045-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:00:57.654847    6068 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220516230045-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1616258s)
	I0516 23:00:57.663572    6068 network_create.go:272] running [docker network inspect default-k8s-different-port-20220516230045-2444] to gather additional debugging logs...
	I0516 23:00:57.663572    6068 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220516230045-2444
	W0516 23:00:58.782858    6068 cli_runner.go:211] docker network inspect default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:00:58.782858    6068 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220516230045-2444: (1.1192769s)
	I0516 23:00:58.782858    6068 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220516230045-2444]: docker network inspect default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220516230045-2444
	I0516 23:00:58.782858    6068 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220516230045-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220516230045-2444
	
	** /stderr **
	I0516 23:00:58.790867    6068 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 23:00:59.912053    6068 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1211768s)
	I0516 23:00:59.931648    6068 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00032a5f0] misses:0}
	I0516 23:00:59.932015    6068 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:00:59.932015    6068 network_create.go:115] attempt to create docker network default-k8s-different-port-20220516230045-2444 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0516 23:00:59.939483    6068 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444
	W0516 23:01:01.121599    6068 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:01:01.121708    6068 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444: (1.1820372s)
	W0516 23:01:01.121708    6068 network_create.go:107] failed to create docker network default-k8s-different-port-20220516230045-2444 192.168.49.0/24, will retry: subnet is taken
	I0516 23:01:01.141998    6068 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00032a5f0] amended:false}} dirty:map[] misses:0}
	I0516 23:01:01.141998    6068 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:01:01.163954    6068 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00032a5f0] amended:true}} dirty:map[192.168.49.0:0xc00032a5f0 192.168.58.0:0xc0005a24c8] misses:0}
	I0516 23:01:01.163954    6068 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:01:01.163954    6068 network_create.go:115] attempt to create docker network default-k8s-different-port-20220516230045-2444 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0516 23:01:01.173039    6068 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444
	W0516 23:01:02.244558    6068 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:01:02.244558    6068 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444: (1.0715096s)
	W0516 23:01:02.244558    6068 network_create.go:107] failed to create docker network default-k8s-different-port-20220516230045-2444 192.168.58.0/24, will retry: subnet is taken
	I0516 23:01:02.263784    6068 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00032a5f0] amended:true}} dirty:map[192.168.49.0:0xc00032a5f0 192.168.58.0:0xc0005a24c8] misses:1}
	I0516 23:01:02.263784    6068 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:01:02.284946    6068 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00032a5f0] amended:true}} dirty:map[192.168.49.0:0xc00032a5f0 192.168.58.0:0xc0005a24c8 192.168.67.0:0xc00032a6b8] misses:1}
	I0516 23:01:02.284946    6068 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:01:02.284946    6068 network_create.go:115] attempt to create docker network default-k8s-different-port-20220516230045-2444 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0516 23:01:02.292633    6068 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444
	W0516 23:01:03.397050    6068 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:01:03.397050    6068 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444: (1.1044082s)
	W0516 23:01:03.397050    6068 network_create.go:107] failed to create docker network default-k8s-different-port-20220516230045-2444 192.168.67.0/24, will retry: subnet is taken
	I0516 23:01:03.415541    6068 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00032a5f0] amended:true}} dirty:map[192.168.49.0:0xc00032a5f0 192.168.58.0:0xc0005a24c8 192.168.67.0:0xc00032a6b8] misses:2}
	I0516 23:01:03.415541    6068 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:01:03.434112    6068 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00032a5f0] amended:true}} dirty:map[192.168.49.0:0xc00032a5f0 192.168.58.0:0xc0005a24c8 192.168.67.0:0xc00032a6b8 192.168.76.0:0xc0005a25e0] misses:2}
	I0516 23:01:03.434112    6068 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:01:03.434112    6068 network_create.go:115] attempt to create docker network default-k8s-different-port-20220516230045-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0516 23:01:03.443190    6068 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444
	W0516 23:01:04.546925    6068 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:01:04.546925    6068 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444: (1.1036052s)
	E0516 23:01:04.546925    6068 network_create.go:104] error while trying to create docker network default-k8s-different-port-20220516230045-2444 192.168.76.0/24: create docker network default-k8s-different-port-20220516230045-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 6fc3e1eef4442f1433c36b743a48cf616abff9aa81b69bcd8dd39cdcc144fb3a (br-6fc3e1eef444): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	W0516 23:01:04.546925    6068 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network default-k8s-different-port-20220516230045-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 6fc3e1eef4442f1433c36b743a48cf616abff9aa81b69bcd8dd39cdcc144fb3a (br-6fc3e1eef444): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network default-k8s-different-port-20220516230045-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 6fc3e1eef4442f1433c36b743a48cf616abff9aa81b69bcd8dd39cdcc144fb3a (br-6fc3e1eef444): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	I0516 23:01:04.567326    6068 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 23:01:05.667028    6068 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0996932s)
	I0516 23:01:05.675043    6068 cli_runner.go:164] Run: docker volume create default-k8s-different-port-20220516230045-2444 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220516230045-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 23:01:06.782887    6068 cli_runner.go:211] docker volume create default-k8s-different-port-20220516230045-2444 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220516230045-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 23:01:06.782926    6068 cli_runner.go:217] Completed: docker volume create default-k8s-different-port-20220516230045-2444 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220516230045-2444 --label created_by.minikube.sigs.k8s.io=true: (1.1077887s)
	I0516 23:01:06.782997    6068 client.go:171] LocalClient.Create took 10.2998593s
	I0516 23:01:08.805363    6068 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:01:08.813922    6068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444
	W0516 23:01:09.926016    6068 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:01:09.926016    6068 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: (1.1120845s)
	I0516 23:01:09.926016    6068 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:01:10.220562    6068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444
	W0516 23:01:11.295273    6068 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:01:11.295273    6068 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: (1.0747014s)
	W0516 23:01:11.295273    6068 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	
	W0516 23:01:11.295273    6068 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:01:11.312251    6068 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:01:11.321256    6068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444
	W0516 23:01:12.402082    6068 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:01:12.402221    6068 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: (1.0808167s)
	I0516 23:01:12.402221    6068 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:01:12.708759    6068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444
	W0516 23:01:13.801980    6068 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:01:13.801980    6068 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: (1.0930514s)
	W0516 23:01:13.801980    6068 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	
	W0516 23:01:13.801980    6068 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:01:13.801980    6068 start.go:134] duration metric: createHost completed in 17.3237904s
	I0516 23:01:13.801980    6068 start.go:81] releasing machines lock for "default-k8s-different-port-20220516230045-2444", held for 17.3237904s
	W0516 23:01:13.801980    6068 start.go:608] error starting host: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220516230045-2444 container: docker volume create default-k8s-different-port-20220516230045-2444 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220516230045-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220516230045-2444: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220516230045-2444': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220516230045-2444: read-only file system
	I0516 23:01:13.818793    6068 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:01:14.917373    6068 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:01:14.917544    6068 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.0985703s)
	I0516 23:01:14.917621    6068 delete.go:82] Unable to get host status for default-k8s-different-port-20220516230045-2444, assuming it has already been deleted: state: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	W0516 23:01:14.917621    6068 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220516230045-2444 container: docker volume create default-k8s-different-port-20220516230045-2444 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220516230045-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220516230045-2444: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220516230045-2444': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220516230045-2444: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220516230045-2444 container: docker volume create default-k8s-different-port-20220516230045-2444 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220516230045-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220516230045-2444: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220516230045-2444': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220516230045-2444: read-only file system
	
	I0516 23:01:14.917621    6068 start.go:623] Will try again in 5 seconds ...
	I0516 23:01:19.923000    6068 start.go:352] acquiring machines lock for default-k8s-different-port-20220516230045-2444: {Name:mkca2c0574e16790f4d61bb6412ca78505ef9070 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 23:01:19.923000    6068 start.go:356] acquired machines lock for "default-k8s-different-port-20220516230045-2444" in 0s
	I0516 23:01:19.923000    6068 start.go:94] Skipping create...Using existing machine configuration
	I0516 23:01:19.923000    6068 fix.go:55] fixHost starting: 
	I0516 23:01:19.941158    6068 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:01:21.061361    6068 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:01:21.061361    6068 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.1201937s)
	I0516 23:01:21.061361    6068 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220516230045-2444: state= err=unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:01:21.061361    6068 fix.go:108] machineExists: false. err=machine does not exist
	I0516 23:01:21.071362    6068 out.go:177] * docker "default-k8s-different-port-20220516230045-2444" container is missing, will recreate.
	I0516 23:01:21.074361    6068 delete.go:124] DEMOLISHING default-k8s-different-port-20220516230045-2444 ...
	I0516 23:01:21.092363    6068 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:01:22.210940    6068 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:01:22.210940    6068 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.1183342s)
	W0516 23:01:22.210940    6068 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:01:22.210940    6068 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:01:22.229628    6068 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:01:23.361589    6068 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:01:23.361589    6068 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.1318873s)
	I0516 23:01:23.361589    6068 delete.go:82] Unable to get host status for default-k8s-different-port-20220516230045-2444, assuming it has already been deleted: state: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:01:23.373476    6068 cli_runner.go:164] Run: docker container inspect -f {{.Id}} default-k8s-different-port-20220516230045-2444
	W0516 23:01:24.488819    6068 cli_runner.go:211] docker container inspect -f {{.Id}} default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:01:24.488819    6068 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} default-k8s-different-port-20220516230045-2444: (1.1153336s)
	I0516 23:01:24.488819    6068 kic.go:356] could not find the container default-k8s-different-port-20220516230045-2444 to remove it. will try anyways
	I0516 23:01:24.495816    6068 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:01:25.578055    6068 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:01:25.578055    6068 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.0822295s)
	W0516 23:01:25.578055    6068 oci.go:84] error getting container status, will try to delete anyways: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:01:25.585055    6068 cli_runner.go:164] Run: docker exec --privileged -t default-k8s-different-port-20220516230045-2444 /bin/bash -c "sudo init 0"
	W0516 23:01:26.684750    6068 cli_runner.go:211] docker exec --privileged -t default-k8s-different-port-20220516230045-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 23:01:26.684750    6068 cli_runner.go:217] Completed: docker exec --privileged -t default-k8s-different-port-20220516230045-2444 /bin/bash -c "sudo init 0": (1.0996854s)
	I0516 23:01:26.684750    6068 oci.go:641] error shutdown default-k8s-different-port-20220516230045-2444: docker exec --privileged -t default-k8s-different-port-20220516230045-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:01:27.707547    6068 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:01:28.795747    6068 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:01:28.795747    6068 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.0881904s)
	I0516 23:01:28.795747    6068 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:01:28.795747    6068 oci.go:655] temporary error: container default-k8s-different-port-20220516230045-2444 status is  but expect it to be exited
	I0516 23:01:28.795747    6068 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:01:29.272631    6068 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:01:30.404116    6068 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:01:30.404116    6068 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.1312145s)
	I0516 23:01:30.404116    6068 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:01:30.404116    6068 oci.go:655] temporary error: container default-k8s-different-port-20220516230045-2444 status is  but expect it to be exited
	I0516 23:01:30.404116    6068 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:01:31.307956    6068 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:01:32.406670    6068 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:01:32.406670    6068 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.0984834s)
	I0516 23:01:32.406790    6068 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:01:32.406790    6068 oci.go:655] temporary error: container default-k8s-different-port-20220516230045-2444 status is  but expect it to be exited
	I0516 23:01:32.406865    6068 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:01:33.058051    6068 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:01:34.139857    6068 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:01:34.139857    6068 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.0815665s)
	I0516 23:01:34.139857    6068 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:01:34.139857    6068 oci.go:655] temporary error: container default-k8s-different-port-20220516230045-2444 status is  but expect it to be exited
	I0516 23:01:34.139857    6068 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:01:35.271122    6068 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:01:36.365152    6068 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:01:36.365152    6068 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.0940201s)
	I0516 23:01:36.365152    6068 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:01:36.365152    6068 oci.go:655] temporary error: container default-k8s-different-port-20220516230045-2444 status is  but expect it to be exited
	I0516 23:01:36.365152    6068 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:01:37.893548    6068 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:01:39.003285    6068 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:01:39.003285    6068 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.109518s)
	I0516 23:01:39.003285    6068 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:01:39.003285    6068 oci.go:655] temporary error: container default-k8s-different-port-20220516230045-2444 status is  but expect it to be exited
	I0516 23:01:39.003285    6068 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:01:42.058343    6068 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:01:43.107742    6068 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:01:43.107860    6068 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.0493903s)
	I0516 23:01:43.108007    6068 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:01:43.108098    6068 oci.go:655] temporary error: container default-k8s-different-port-20220516230045-2444 status is  but expect it to be exited
	I0516 23:01:43.108149    6068 oci.go:88] couldn't shut down default-k8s-different-port-20220516230045-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	 
	I0516 23:01:43.117600    6068 cli_runner.go:164] Run: docker rm -f -v default-k8s-different-port-20220516230045-2444
	I0516 23:01:44.210407    6068 cli_runner.go:217] Completed: docker rm -f -v default-k8s-different-port-20220516230045-2444: (1.0927713s)
	I0516 23:01:44.219479    6068 cli_runner.go:164] Run: docker container inspect -f {{.Id}} default-k8s-different-port-20220516230045-2444
	W0516 23:01:45.283639    6068 cli_runner.go:211] docker container inspect -f {{.Id}} default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:01:45.283639    6068 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} default-k8s-different-port-20220516230045-2444: (1.064151s)
	I0516 23:01:45.293991    6068 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220516230045-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:01:46.396153    6068 cli_runner.go:211] docker network inspect default-k8s-different-port-20220516230045-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:01:46.396189    6068 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220516230045-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.10204s)
	I0516 23:01:46.404472    6068 network_create.go:272] running [docker network inspect default-k8s-different-port-20220516230045-2444] to gather additional debugging logs...
	I0516 23:01:46.404472    6068 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220516230045-2444
	W0516 23:01:47.527193    6068 cli_runner.go:211] docker network inspect default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:01:47.527193    6068 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220516230045-2444: (1.1227116s)
	I0516 23:01:47.527193    6068 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220516230045-2444]: docker network inspect default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220516230045-2444
	I0516 23:01:47.527193    6068 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220516230045-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220516230045-2444
	
	** /stderr **
	W0516 23:01:47.528191    6068 delete.go:139] delete failed (probably ok) <nil>
	I0516 23:01:47.528191    6068 fix.go:115] Sleeping 1 second for extra luck!
	I0516 23:01:48.542420    6068 start.go:131] createHost starting for "" (driver="docker")
	I0516 23:01:48.547366    6068 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0516 23:01:48.547717    6068 start.go:165] libmachine.API.Create for "default-k8s-different-port-20220516230045-2444" (driver="docker")
	I0516 23:01:48.547745    6068 client.go:168] LocalClient.Create starting
	I0516 23:01:48.547929    6068 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 23:01:48.547929    6068 main.go:134] libmachine: Decoding PEM data...
	I0516 23:01:48.547929    6068 main.go:134] libmachine: Parsing certificate...
	I0516 23:01:48.548560    6068 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 23:01:48.548905    6068 main.go:134] libmachine: Decoding PEM data...
	I0516 23:01:48.548905    6068 main.go:134] libmachine: Parsing certificate...
	I0516 23:01:48.558415    6068 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220516230045-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:01:49.700032    6068 cli_runner.go:211] docker network inspect default-k8s-different-port-20220516230045-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:01:49.700032    6068 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220516230045-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1416076s)
	I0516 23:01:49.709412    6068 network_create.go:272] running [docker network inspect default-k8s-different-port-20220516230045-2444] to gather additional debugging logs...
	I0516 23:01:49.709412    6068 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220516230045-2444
	W0516 23:01:50.830019    6068 cli_runner.go:211] docker network inspect default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:01:50.830019    6068 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220516230045-2444: (1.1205971s)
	I0516 23:01:50.830019    6068 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220516230045-2444]: docker network inspect default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220516230045-2444
	I0516 23:01:50.830019    6068 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220516230045-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220516230045-2444
	
	** /stderr **
	I0516 23:01:50.852398    6068 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 23:01:51.984953    6068 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.132546s)
	I0516 23:01:52.002397    6068 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00032a5f0] amended:true}} dirty:map[192.168.49.0:0xc00032a5f0 192.168.58.0:0xc0005a24c8 192.168.67.0:0xc00032a6b8 192.168.76.0:0xc0005a25e0] misses:2}
	I0516 23:01:52.002397    6068 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:01:52.020942    6068 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00032a5f0] amended:true}} dirty:map[192.168.49.0:0xc00032a5f0 192.168.58.0:0xc0005a24c8 192.168.67.0:0xc00032a6b8 192.168.76.0:0xc0005a25e0] misses:3}
	I0516 23:01:52.021059    6068 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:01:52.036882    6068 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00032a5f0 192.168.58.0:0xc0005a24c8 192.168.67.0:0xc00032a6b8 192.168.76.0:0xc0005a25e0] amended:false}} dirty:map[] misses:0}
	I0516 23:01:52.036882    6068 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:01:52.055529    6068 network.go:279] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00032a5f0 192.168.58.0:0xc0005a24c8 192.168.67.0:0xc00032a6b8 192.168.76.0:0xc0005a25e0] amended:false}} dirty:map[] misses:0}
	I0516 23:01:52.055529    6068 network.go:238] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:01:52.070946    6068 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00032a5f0 192.168.58.0:0xc0005a24c8 192.168.67.0:0xc00032a6b8 192.168.76.0:0xc0005a25e0] amended:true}} dirty:map[192.168.49.0:0xc00032a5f0 192.168.58.0:0xc0005a24c8 192.168.67.0:0xc00032a6b8 192.168.76.0:0xc0005a25e0 192.168.85.0:0xc000526370] misses:0}
	I0516 23:01:52.070946    6068 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:01:52.070946    6068 network_create.go:115] attempt to create docker network default-k8s-different-port-20220516230045-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0516 23:01:52.080569    6068 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444
	W0516 23:01:53.160241    6068 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:01:53.160978    6068 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444: (1.0796178s)
	E0516 23:01:53.160978    6068 network_create.go:104] error while trying to create docker network default-k8s-different-port-20220516230045-2444 192.168.85.0/24: create docker network default-k8s-different-port-20220516230045-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 30a3a20ec5677473d3d5e7598a6b54f5bfd9c6f54664e3328902eea0693d40c8 (br-30a3a20ec567): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	W0516 23:01:53.160978    6068 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network default-k8s-different-port-20220516230045-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 30a3a20ec5677473d3d5e7598a6b54f5bfd9c6f54664e3328902eea0693d40c8 (br-30a3a20ec567): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network default-k8s-different-port-20220516230045-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 30a3a20ec5677473d3d5e7598a6b54f5bfd9c6f54664e3328902eea0693d40c8 (br-30a3a20ec567): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	I0516 23:01:53.179093    6068 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 23:01:54.221156    6068 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0418567s)
	I0516 23:01:54.229782    6068 cli_runner.go:164] Run: docker volume create default-k8s-different-port-20220516230045-2444 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220516230045-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 23:01:55.310027    6068 cli_runner.go:211] docker volume create default-k8s-different-port-20220516230045-2444 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220516230045-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 23:01:55.310027    6068 cli_runner.go:217] Completed: docker volume create default-k8s-different-port-20220516230045-2444 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220516230045-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0802357s)
	I0516 23:01:55.310027    6068 client.go:171] LocalClient.Create took 6.7621705s
	I0516 23:01:57.329795    6068 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:01:57.339040    6068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444
	W0516 23:01:58.404259    6068 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:01:58.404259    6068 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: (1.0652095s)
	I0516 23:01:58.404259    6068 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:01:58.745742    6068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444
	W0516 23:01:59.853393    6068 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:01:59.853393    6068 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: (1.1076411s)
	W0516 23:01:59.853393    6068 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	
	W0516 23:01:59.853393    6068 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:01:59.864384    6068 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:01:59.871384    6068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444
	W0516 23:02:00.988406    6068 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:02:00.988406    6068 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: (1.1167217s)
	I0516 23:02:00.988406    6068 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:02:01.220583    6068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444
	W0516 23:02:02.300431    6068 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:02:02.300431    6068 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: (1.0795874s)
	W0516 23:02:02.300431    6068 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	
	W0516 23:02:02.300431    6068 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:02:02.300431    6068 start.go:134] duration metric: createHost completed in 13.7578619s
	I0516 23:02:02.312123    6068 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:02:02.319052    6068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444
	W0516 23:02:03.399175    6068 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:02:03.399225    6068 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: (1.0798849s)
	I0516 23:02:03.399384    6068 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:02:03.662221    6068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444
	W0516 23:02:04.744671    6068 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:02:04.744671    6068 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: (1.0822083s)
	W0516 23:02:04.744671    6068 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	
	W0516 23:02:04.744671    6068 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:02:04.756435    6068 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:02:04.763280    6068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444
	W0516 23:02:05.858133    6068 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:02:05.858371    6068 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: (1.0947835s)
	I0516 23:02:05.858696    6068 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:02:06.070697    6068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444
	W0516 23:02:07.185529    6068 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:02:07.185529    6068 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: (1.1142484s)
	W0516 23:02:07.185529    6068 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	
	W0516 23:02:07.185529    6068 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:02:07.185529    6068 fix.go:57] fixHost completed within 47.2621266s
	I0516 23:02:07.185529    6068 start.go:81] releasing machines lock for "default-k8s-different-port-20220516230045-2444", held for 47.2621266s
	W0516 23:02:07.185529    6068 out.go:239] * Failed to start docker container. Running "minikube delete -p default-k8s-different-port-20220516230045-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220516230045-2444 container: docker volume create default-k8s-different-port-20220516230045-2444 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220516230045-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220516230045-2444: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220516230045-2444': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220516230045-2444: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p default-k8s-different-port-20220516230045-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220516230045-2444 container: docker volume create default-k8s-different-port-20220516230045-2444 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220516230045-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220516230045-2444: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220516230045-2444': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220516230045-2444: read-only file system
	
	I0516 23:02:07.196635    6068 out.go:177] 
	W0516 23:02:07.199365    6068 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220516230045-2444 container: docker volume create default-k8s-different-port-20220516230045-2444 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220516230045-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220516230045-2444: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220516230045-2444': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220516230045-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220516230045-2444 container: docker volume create default-k8s-different-port-20220516230045-2444 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220516230045-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220516230045-2444: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220516230045-2444': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220516230045-2444: read-only file system
	
	W0516 23:02:07.199625    6068 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0516 23:02:07.199625    6068 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0516 23:02:07.203595    6068 out.go:177] 

** /stderr **
start_stop_delete_test.go:190: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220516230045-2444 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.6": exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220516230045-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220516230045-2444: exit status 1 (1.2124999s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220516230045-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220516230045-2444 -n default-k8s-different-port-20220516230045-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220516230045-2444 -n default-k8s-different-port-20220516230045-2444: exit status 7 (3.0485114s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 23:02:11.547311    8280 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220516230045-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/FirstStart (86.09s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (4.13s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:277: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-20220516225628-2444" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220516225628-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220516225628-2444: exit status 1 (1.140944s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220516225628-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220516225628-2444 -n embed-certs-20220516225628-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220516225628-2444 -n embed-certs-20220516225628-2444: exit status 7 (2.9778644s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 23:00:51.784418    8204 status.go:247] status error: host: state: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220516225628-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (4.13s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (4.45s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:290: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-20220516225628-2444" does not exist
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context embed-certs-20220516225628-2444 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:293: (dbg) Non-zero exit: kubectl --context embed-certs-20220516225628-2444 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (244.1266ms)

** stderr ** 
	error: context "embed-certs-20220516225628-2444" does not exist

** /stderr **
start_stop_delete_test.go:295: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-20220516225628-2444 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:299: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220516225628-2444

=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220516225628-2444: exit status 1 (1.157486s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220516225628-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220516225628-2444 -n embed-certs-20220516225628-2444

=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220516225628-2444 -n embed-certs-20220516225628-2444: exit status 7 (3.0262851s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 23:00:56.230996    6580 status.go:247] status error: host: state: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220516225628-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (4.45s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (7.46s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p embed-certs-20220516225628-2444 "sudo crictl images -o json"
start_stop_delete_test.go:306: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p embed-certs-20220516225628-2444 "sudo crictl images -o json": exit status 80 (3.2774122s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_2ebd0b017f5d88727e5083393ee181280e239d1d_4.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:306: failed to get images inside minikube. args "out/minikube-windows-amd64.exe ssh -p embed-certs-20220516225628-2444 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:306: failed to decode images json unexpected end of JSON input. output:

start_stop_delete_test.go:306: v1.23.6 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns/coredns:v1.8.6",
- 	"k8s.gcr.io/etcd:3.5.1-0",
- 	"k8s.gcr.io/kube-apiserver:v1.23.6",
- 	"k8s.gcr.io/kube-controller-manager:v1.23.6",
- 	"k8s.gcr.io/kube-proxy:v1.23.6",
- 	"k8s.gcr.io/kube-scheduler:v1.23.6",
- 	"k8s.gcr.io/pause:3.6",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220516225628-2444

=== CONT  TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220516225628-2444: exit status 1 (1.1602999s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220516225628-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220516225628-2444 -n embed-certs-20220516225628-2444

=== CONT  TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220516225628-2444 -n embed-certs-20220516225628-2444: exit status 7 (3.013535s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 23:01:03.694977    2092 status.go:247] status error: host: state: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220516225628-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (7.46s)

TestStartStop/group/newest-cni/serial/FirstStart (85.89s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-20220516230100-2444 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.6

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p newest-cni-20220516230100-2444 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.6: exit status 60 (1m21.6441345s)

-- stdout --
	* [newest-cni-20220516230100-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node newest-cni-20220516230100-2444 in cluster newest-cni-20220516230100-2444
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "newest-cni-20220516230100-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0516 23:01:01.123885    6312 out.go:296] Setting OutFile to fd 1964 ...
	I0516 23:01:01.188762    6312 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:01:01.188762    6312 out.go:309] Setting ErrFile to fd 1720...
	I0516 23:01:01.189300    6312 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:01:01.203864    6312 out.go:303] Setting JSON to false
	I0516 23:01:01.210268    6312 start.go:115] hostinfo: {"hostname":"minikube2","uptime":5173,"bootTime":1652736888,"procs":153,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0516 23:01:01.210268    6312 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0516 23:01:01.215125    6312 out.go:177] * [newest-cni-20220516230100-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0516 23:01:01.225133    6312 notify.go:193] Checking for updates...
	I0516 23:01:01.227285    6312 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0516 23:01:01.229895    6312 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0516 23:01:01.232379    6312 out.go:177]   - MINIKUBE_LOCATION=12739
	I0516 23:01:01.235020    6312 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0516 23:01:01.239231    6312 config.go:178] Loaded profile config "cert-expiration-20220516225440-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:01:01.239384    6312 config.go:178] Loaded profile config "default-k8s-different-port-20220516230045-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:01:01.239950    6312 config.go:178] Loaded profile config "embed-certs-20220516225628-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:01:01.240479    6312 config.go:178] Loaded profile config "multinode-20220516223121-2444-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:01:01.240595    6312 driver.go:358] Setting default libvirt URI to qemu:///system
	I0516 23:01:03.983365    6312 docker.go:137] docker version: linux-20.10.14
	I0516 23:01:03.993368    6312 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 23:01:06.150220    6312 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1566263s)
	I0516 23:01:06.150842    6312 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 23:01:05.0569608 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0516 23:01:06.154174    6312 out.go:177] * Using the docker driver based on user configuration
	I0516 23:01:06.157649    6312 start.go:284] selected driver: docker
	I0516 23:01:06.157649    6312 start.go:806] validating driver "docker" against <nil>
	I0516 23:01:06.157649    6312 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0516 23:01:06.236226    6312 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 23:01:08.353159    6312 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1167088s)
	I0516 23:01:08.353159    6312 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 23:01:07.2911771 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0516 23:01:08.353159    6312 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	W0516 23:01:08.353159    6312 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0516 23:01:08.353854    6312 start_flags.go:866] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0516 23:01:08.358924    6312 out.go:177] * Using Docker Desktop driver with the root privilege
	I0516 23:01:08.361058    6312 cni.go:95] Creating CNI manager for ""
	I0516 23:01:08.361058    6312 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0516 23:01:08.361058    6312 start_flags.go:306] config:
	{Name:newest-cni-20220516230100-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220516230100-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 23:01:08.364613    6312 out.go:177] * Starting control plane node newest-cni-20220516230100-2444 in cluster newest-cni-20220516230100-2444
	I0516 23:01:08.366379    6312 cache.go:120] Beginning downloading kic base image for docker with docker
	I0516 23:01:08.369425    6312 out.go:177] * Pulling base image ...
	I0516 23:01:08.372195    6312 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0516 23:01:08.372195    6312 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0516 23:01:08.372351    6312 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0516 23:01:08.372351    6312 cache.go:57] Caching tarball of preloaded images
	I0516 23:01:08.372351    6312 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0516 23:01:08.372965    6312 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0516 23:01:08.372965    6312 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\newest-cni-20220516230100-2444\config.json ...
	I0516 23:01:08.372965    6312 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\newest-cni-20220516230100-2444\config.json: {Name:mk9d93b3885fa1a85261a8c0cf1361170aa665c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0516 23:01:09.477262    6312 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0516 23:01:09.477405    6312 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:01:09.477526    6312 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:01:09.477526    6312 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
	I0516 23:01:09.477526    6312 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory, skipping pull
	I0516 23:01:09.477526    6312 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in cache, skipping pull
	I0516 23:01:09.477526    6312 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
	I0516 23:01:09.477526    6312 cache.go:160] Loading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from local cache
	I0516 23:01:09.478149    6312 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:01:11.805144    6312 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from cached tarball
	I0516 23:01:11.805278    6312 cache.go:206] Successfully downloaded all kic artifacts
	I0516 23:01:11.805370    6312 start.go:352] acquiring machines lock for newest-cni-20220516230100-2444: {Name:mk1391c96b8bd2d1f34dcc3d7a2394a9d5104457 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 23:01:11.805689    6312 start.go:356] acquired machines lock for "newest-cni-20220516230100-2444" in 300.1µs
	I0516 23:01:11.805986    6312 start.go:91] Provisioning new machine with config: &{Name:newest-cni-20220516230100-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220516230100-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0516 23:01:11.806269    6312 start.go:131] createHost starting for "" (driver="docker")
	I0516 23:01:11.809696    6312 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0516 23:01:11.810171    6312 start.go:165] libmachine.API.Create for "newest-cni-20220516230100-2444" (driver="docker")
	I0516 23:01:11.810224    6312 client.go:168] LocalClient.Create starting
	I0516 23:01:11.810745    6312 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 23:01:11.811007    6312 main.go:134] libmachine: Decoding PEM data...
	I0516 23:01:11.811061    6312 main.go:134] libmachine: Parsing certificate...
	I0516 23:01:11.811284    6312 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 23:01:11.811284    6312 main.go:134] libmachine: Decoding PEM data...
	I0516 23:01:11.811284    6312 main.go:134] libmachine: Parsing certificate...
	I0516 23:01:11.822540    6312 cli_runner.go:164] Run: docker network inspect newest-cni-20220516230100-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:01:12.962376    6312 cli_runner.go:211] docker network inspect newest-cni-20220516230100-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:01:12.962376    6312 cli_runner.go:217] Completed: docker network inspect newest-cni-20220516230100-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1398264s)
	I0516 23:01:12.972849    6312 network_create.go:272] running [docker network inspect newest-cni-20220516230100-2444] to gather additional debugging logs...
	I0516 23:01:12.972849    6312 cli_runner.go:164] Run: docker network inspect newest-cni-20220516230100-2444
	W0516 23:01:14.052373    6312 cli_runner.go:211] docker network inspect newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:01:14.052432    6312 cli_runner.go:217] Completed: docker network inspect newest-cni-20220516230100-2444: (1.0793273s)
	I0516 23:01:14.052467    6312 network_create.go:275] error running [docker network inspect newest-cni-20220516230100-2444]: docker network inspect newest-cni-20220516230100-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20220516230100-2444
	I0516 23:01:14.052467    6312 network_create.go:277] output of [docker network inspect newest-cni-20220516230100-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20220516230100-2444
	
	** /stderr **
	I0516 23:01:14.061548    6312 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 23:01:15.137339    6312 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0756962s)
	I0516 23:01:15.160472    6312 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0005164e8] misses:0}
	I0516 23:01:15.160472    6312 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:01:15.160472    6312 network_create.go:115] attempt to create docker network newest-cni-20220516230100-2444 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0516 23:01:15.169967    6312 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444
	W0516 23:01:16.254279    6312 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:01:16.254385    6312 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444: (1.0841822s)
	W0516 23:01:16.254385    6312 network_create.go:107] failed to create docker network newest-cni-20220516230100-2444 192.168.49.0/24, will retry: subnet is taken
	I0516 23:01:16.273394    6312 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005164e8] amended:false}} dirty:map[] misses:0}
	I0516 23:01:16.273394    6312 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:01:16.292408    6312 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005164e8] amended:true}} dirty:map[192.168.49.0:0xc0005164e8 192.168.58.0:0xc000006448] misses:0}
	I0516 23:01:16.292408    6312 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:01:16.292408    6312 network_create.go:115] attempt to create docker network newest-cni-20220516230100-2444 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0516 23:01:16.299774    6312 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444
	W0516 23:01:17.348255    6312 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:01:17.348367    6312 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444: (1.0484717s)
	W0516 23:01:17.348367    6312 network_create.go:107] failed to create docker network newest-cni-20220516230100-2444 192.168.58.0/24, will retry: subnet is taken
	I0516 23:01:17.368641    6312 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005164e8] amended:true}} dirty:map[192.168.49.0:0xc0005164e8 192.168.58.0:0xc000006448] misses:1}
	I0516 23:01:17.369037    6312 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:01:17.389028    6312 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005164e8] amended:true}} dirty:map[192.168.49.0:0xc0005164e8 192.168.58.0:0xc000006448 192.168.67.0:0xc0000d2260] misses:1}
	I0516 23:01:17.389872    6312 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:01:17.389872    6312 network_create.go:115] attempt to create docker network newest-cni-20220516230100-2444 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0516 23:01:17.400037    6312 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444
	W0516 23:01:18.529341    6312 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:01:18.529341    6312 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444: (1.1292935s)
	W0516 23:01:18.529341    6312 network_create.go:107] failed to create docker network newest-cni-20220516230100-2444 192.168.67.0/24, will retry: subnet is taken
	I0516 23:01:18.549030    6312 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005164e8] amended:true}} dirty:map[192.168.49.0:0xc0005164e8 192.168.58.0:0xc000006448 192.168.67.0:0xc0000d2260] misses:2}
	I0516 23:01:18.549030    6312 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:01:18.569243    6312 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005164e8] amended:true}} dirty:map[192.168.49.0:0xc0005164e8 192.168.58.0:0xc000006448 192.168.67.0:0xc0000d2260 192.168.76.0:0xc000516580] misses:2}
	I0516 23:01:18.569336    6312 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:01:18.569391    6312 network_create.go:115] attempt to create docker network newest-cni-20220516230100-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0516 23:01:18.577273    6312 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444
	W0516 23:01:19.673606    6312 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:01:19.673606    6312 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444: (1.096141s)
	E0516 23:01:19.673606    6312 network_create.go:104] error while trying to create docker network newest-cni-20220516230100-2444 192.168.76.0/24: create docker network newest-cni-20220516230100-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b71261c5e04ca8f36afb02665e49ce22e52010412bb0e1cd2ff8e6331871583e (br-b71261c5e04c): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	W0516 23:01:19.674304    6312 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network newest-cni-20220516230100-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b71261c5e04ca8f36afb02665e49ce22e52010412bb0e1cd2ff8e6331871583e (br-b71261c5e04c): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network newest-cni-20220516230100-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b71261c5e04ca8f36afb02665e49ce22e52010412bb0e1cd2ff8e6331871583e (br-b71261c5e04c): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	I0516 23:01:19.690597    6312 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 23:01:20.823948    6312 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.133193s)
	I0516 23:01:20.835336    6312 cli_runner.go:164] Run: docker volume create newest-cni-20220516230100-2444 --label name.minikube.sigs.k8s.io=newest-cni-20220516230100-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 23:01:21.959537    6312 cli_runner.go:211] docker volume create newest-cni-20220516230100-2444 --label name.minikube.sigs.k8s.io=newest-cni-20220516230100-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 23:01:21.959537    6312 cli_runner.go:217] Completed: docker volume create newest-cni-20220516230100-2444 --label name.minikube.sigs.k8s.io=newest-cni-20220516230100-2444 --label created_by.minikube.sigs.k8s.io=true: (1.1241914s)
	I0516 23:01:21.959537    6312 client.go:171] LocalClient.Create took 10.1491644s
	I0516 23:01:23.975070    6312 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:01:23.985350    6312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444
	W0516 23:01:25.057863    6312 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:01:25.058027    6312 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: (1.0725035s)
	I0516 23:01:25.058294    6312 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:01:25.354886    6312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444
	W0516 23:01:26.481561    6312 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:01:26.481709    6312 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: (1.1266653s)
	W0516 23:01:26.481748    6312 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	
	W0516 23:01:26.481748    6312 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:01:26.494791    6312 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:01:26.502411    6312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444
	W0516 23:01:27.589399    6312 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:01:27.589528    6312 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: (1.0869778s)
	I0516 23:01:27.589528    6312 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:01:27.895275    6312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444
	W0516 23:01:29.029071    6312 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:01:29.029071    6312 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: (1.1337864s)
	W0516 23:01:29.029071    6312 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	
	W0516 23:01:29.029071    6312 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:01:29.029071    6312 start.go:134] duration metric: createHost completed in 17.2226309s
	I0516 23:01:29.029071    6312 start.go:81] releasing machines lock for "newest-cni-20220516230100-2444", held for 17.2231708s
	W0516 23:01:29.029803    6312 start.go:608] error starting host: creating host: create: creating: setting up container node: creating volume for newest-cni-20220516230100-2444 container: docker volume create newest-cni-20220516230100-2444 --label name.minikube.sigs.k8s.io=newest-cni-20220516230100-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220516230100-2444: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220516230100-2444': mkdir /var/lib/docker/volumes/newest-cni-20220516230100-2444: read-only file system
	I0516 23:01:29.047063    6312 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:01:30.200657    6312 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:01:30.200657    6312 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.1533732s)
	I0516 23:01:30.200857    6312 delete.go:82] Unable to get host status for newest-cni-20220516230100-2444, assuming it has already been deleted: state: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	W0516 23:01:30.201001    6312 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for newest-cni-20220516230100-2444 container: docker volume create newest-cni-20220516230100-2444 --label name.minikube.sigs.k8s.io=newest-cni-20220516230100-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220516230100-2444: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220516230100-2444': mkdir /var/lib/docker/volumes/newest-cni-20220516230100-2444: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for newest-cni-20220516230100-2444 container: docker volume create newest-cni-20220516230100-2444 --label name.minikube.sigs.k8s.io=newest-cni-20220516230100-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220516230100-2444: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220516230100-2444': mkdir /var/lib/docker/volumes/newest-cni-20220516230100-2444: read-only file system
	
	I0516 23:01:30.201001    6312 start.go:623] Will try again in 5 seconds ...
	I0516 23:01:35.215823    6312 start.go:352] acquiring machines lock for newest-cni-20220516230100-2444: {Name:mk1391c96b8bd2d1f34dcc3d7a2394a9d5104457 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 23:01:35.216141    6312 start.go:356] acquired machines lock for "newest-cni-20220516230100-2444" in 247.4µs
	I0516 23:01:35.216344    6312 start.go:94] Skipping create...Using existing machine configuration
	I0516 23:01:35.216344    6312 fix.go:55] fixHost starting: 
	I0516 23:01:35.231235    6312 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:01:36.349265    6312 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:01:36.349265    6312 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.1179144s)
	I0516 23:01:36.349265    6312 fix.go:103] recreateIfNeeded on newest-cni-20220516230100-2444: state= err=unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:01:36.349265    6312 fix.go:108] machineExists: false. err=machine does not exist
	I0516 23:01:36.353057    6312 out.go:177] * docker "newest-cni-20220516230100-2444" container is missing, will recreate.
	I0516 23:01:36.355395    6312 delete.go:124] DEMOLISHING newest-cni-20220516230100-2444 ...
	I0516 23:01:36.374152    6312 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:01:37.474102    6312 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:01:37.474102    6312 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.099797s)
	W0516 23:01:37.474102    6312 stop.go:75] unable to get state: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:01:37.474102    6312 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:01:37.490992    6312 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:01:38.564166    6312 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:01:38.564166    6312 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.0731649s)
	I0516 23:01:38.564166    6312 delete.go:82] Unable to get host status for newest-cni-20220516230100-2444, assuming it has already been deleted: state: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:01:38.571170    6312 cli_runner.go:164] Run: docker container inspect -f {{.Id}} newest-cni-20220516230100-2444
	W0516 23:01:39.698255    6312 cli_runner.go:211] docker container inspect -f {{.Id}} newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:01:39.698304    6312 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} newest-cni-20220516230100-2444: (1.1269503s)
	I0516 23:01:39.698349    6312 kic.go:356] could not find the container newest-cni-20220516230100-2444 to remove it. will try anyways
	I0516 23:01:39.707476    6312 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:01:40.789530    6312 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:01:40.789530    6312 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.0820446s)
	W0516 23:01:40.789530    6312 oci.go:84] error getting container status, will try to delete anyways: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:01:40.798564    6312 cli_runner.go:164] Run: docker exec --privileged -t newest-cni-20220516230100-2444 /bin/bash -c "sudo init 0"
	W0516 23:01:41.874910    6312 cli_runner.go:211] docker exec --privileged -t newest-cni-20220516230100-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 23:01:41.874987    6312 cli_runner.go:217] Completed: docker exec --privileged -t newest-cni-20220516230100-2444 /bin/bash -c "sudo init 0": (1.0760027s)
	I0516 23:01:41.874987    6312 oci.go:641] error shutdown newest-cni-20220516230100-2444: docker exec --privileged -t newest-cni-20220516230100-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:01:42.895857    6312 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:01:44.000503    6312 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:01:44.000731    6312 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.1046372s)
	I0516 23:01:44.000845    6312 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:01:44.000845    6312 oci.go:655] temporary error: container newest-cni-20220516230100-2444 status is  but expect it to be exited
	I0516 23:01:44.000891    6312 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:01:44.486422    6312 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:01:45.563830    6312 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:01:45.563886    6312 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.0772703s)
	I0516 23:01:45.563999    6312 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:01:45.564055    6312 oci.go:655] temporary error: container newest-cni-20220516230100-2444 status is  but expect it to be exited
	I0516 23:01:45.564092    6312 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:01:46.467725    6312 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:01:47.543574    6312 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:01:47.543638    6312 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.0758401s)
	I0516 23:01:47.543638    6312 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:01:47.543638    6312 oci.go:655] temporary error: container newest-cni-20220516230100-2444 status is  but expect it to be exited
	I0516 23:01:47.543638    6312 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:01:48.197302    6312 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:01:49.322730    6312 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:01:49.322730    6312 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.1254186s)
	I0516 23:01:49.322730    6312 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:01:49.322730    6312 oci.go:655] temporary error: container newest-cni-20220516230100-2444 status is  but expect it to be exited
	I0516 23:01:49.322730    6312 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:01:50.448548    6312 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:01:51.592475    6312 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:01:51.592475    6312 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.1439175s)
	I0516 23:01:51.592475    6312 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:01:51.592475    6312 oci.go:655] temporary error: container newest-cni-20220516230100-2444 status is  but expect it to be exited
	I0516 23:01:51.592475    6312 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:01:53.124276    6312 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:01:54.191116    6312 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:01:54.191116    6312 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.0667869s)
	I0516 23:01:54.191323    6312 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:01:54.191376    6312 oci.go:655] temporary error: container newest-cni-20220516230100-2444 status is  but expect it to be exited
	I0516 23:01:54.191376    6312 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:01:57.250109    6312 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:01:58.326729    6312 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:01:58.326729    6312 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.0762971s)
	I0516 23:01:58.327037    6312 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:01:58.327037    6312 oci.go:655] temporary error: container newest-cni-20220516230100-2444 status is  but expect it to be exited
	I0516 23:01:58.327203    6312 oci.go:88] couldn't shut down newest-cni-20220516230100-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	 
	I0516 23:01:58.336374    6312 cli_runner.go:164] Run: docker rm -f -v newest-cni-20220516230100-2444
	I0516 23:01:59.414925    6312 cli_runner.go:217] Completed: docker rm -f -v newest-cni-20220516230100-2444: (1.0785419s)
	I0516 23:01:59.423262    6312 cli_runner.go:164] Run: docker container inspect -f {{.Id}} newest-cni-20220516230100-2444
	W0516 23:02:00.558952    6312 cli_runner.go:211] docker container inspect -f {{.Id}} newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:02:00.558952    6312 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} newest-cni-20220516230100-2444: (1.135681s)
	I0516 23:02:00.571658    6312 cli_runner.go:164] Run: docker network inspect newest-cni-20220516230100-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:02:01.682360    6312 cli_runner.go:211] docker network inspect newest-cni-20220516230100-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:02:01.682360    6312 cli_runner.go:217] Completed: docker network inspect newest-cni-20220516230100-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1102524s)
	I0516 23:02:01.691152    6312 network_create.go:272] running [docker network inspect newest-cni-20220516230100-2444] to gather additional debugging logs...
	I0516 23:02:01.691152    6312 cli_runner.go:164] Run: docker network inspect newest-cni-20220516230100-2444
	W0516 23:02:02.754419    6312 cli_runner.go:211] docker network inspect newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:02:02.754476    6312 cli_runner.go:217] Completed: docker network inspect newest-cni-20220516230100-2444: (1.0631986s)
	I0516 23:02:02.754476    6312 network_create.go:275] error running [docker network inspect newest-cni-20220516230100-2444]: docker network inspect newest-cni-20220516230100-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20220516230100-2444
	I0516 23:02:02.754476    6312 network_create.go:277] output of [docker network inspect newest-cni-20220516230100-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20220516230100-2444
	
	** /stderr **
	W0516 23:02:02.755819    6312 delete.go:139] delete failed (probably ok) <nil>
	I0516 23:02:02.755819    6312 fix.go:115] Sleeping 1 second for extra luck!
	I0516 23:02:03.761059    6312 start.go:131] createHost starting for "" (driver="docker")
	I0516 23:02:03.767790    6312 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0516 23:02:03.768527    6312 start.go:165] libmachine.API.Create for "newest-cni-20220516230100-2444" (driver="docker")
	I0516 23:02:03.768635    6312 client.go:168] LocalClient.Create starting
	I0516 23:02:03.769171    6312 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 23:02:03.769439    6312 main.go:134] libmachine: Decoding PEM data...
	I0516 23:02:03.769486    6312 main.go:134] libmachine: Parsing certificate...
	I0516 23:02:03.769711    6312 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 23:02:03.769948    6312 main.go:134] libmachine: Decoding PEM data...
	I0516 23:02:03.769981    6312 main.go:134] libmachine: Parsing certificate...
	I0516 23:02:03.779202    6312 cli_runner.go:164] Run: docker network inspect newest-cni-20220516230100-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:02:04.837057    6312 cli_runner.go:211] docker network inspect newest-cni-20220516230100-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:02:04.837162    6312 cli_runner.go:217] Completed: docker network inspect newest-cni-20220516230100-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0577151s)
	I0516 23:02:04.845445    6312 network_create.go:272] running [docker network inspect newest-cni-20220516230100-2444] to gather additional debugging logs...
	I0516 23:02:04.845445    6312 cli_runner.go:164] Run: docker network inspect newest-cni-20220516230100-2444
	W0516 23:02:05.920154    6312 cli_runner.go:211] docker network inspect newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:02:05.920340    6312 cli_runner.go:217] Completed: docker network inspect newest-cni-20220516230100-2444: (1.0745832s)
	I0516 23:02:05.920368    6312 network_create.go:275] error running [docker network inspect newest-cni-20220516230100-2444]: docker network inspect newest-cni-20220516230100-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20220516230100-2444
	I0516 23:02:05.920415    6312 network_create.go:277] output of [docker network inspect newest-cni-20220516230100-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20220516230100-2444
	
	** /stderr **
	I0516 23:02:05.930272    6312 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 23:02:07.016066    6312 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0853994s)
	I0516 23:02:07.031850    6312 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005164e8] amended:true}} dirty:map[192.168.49.0:0xc0005164e8 192.168.58.0:0xc000006448 192.168.67.0:0xc0000d2260 192.168.76.0:0xc000516580] misses:2}
	I0516 23:02:07.032871    6312 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:02:07.046787    6312 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005164e8] amended:true}} dirty:map[192.168.49.0:0xc0005164e8 192.168.58.0:0xc000006448 192.168.67.0:0xc0000d2260 192.168.76.0:0xc000516580] misses:3}
	I0516 23:02:07.046787    6312 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:02:07.061503    6312 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005164e8 192.168.58.0:0xc000006448 192.168.67.0:0xc0000d2260 192.168.76.0:0xc000516580] amended:false}} dirty:map[] misses:0}
	I0516 23:02:07.061503    6312 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:02:07.078825    6312 network.go:279] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005164e8 192.168.58.0:0xc000006448 192.168.67.0:0xc0000d2260 192.168.76.0:0xc000516580] amended:false}} dirty:map[] misses:0}
	I0516 23:02:07.078825    6312 network.go:238] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:02:07.096529    6312 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005164e8 192.168.58.0:0xc000006448 192.168.67.0:0xc0000d2260 192.168.76.0:0xc000516580] amended:true}} dirty:map[192.168.49.0:0xc0005164e8 192.168.58.0:0xc000006448 192.168.67.0:0xc0000d2260 192.168.76.0:0xc000516580 192.168.85.0:0xc0000d25b0] misses:0}
	I0516 23:02:07.096666    6312 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:02:07.096704    6312 network_create.go:115] attempt to create docker network newest-cni-20220516230100-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0516 23:02:07.105532    6312 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444
	W0516 23:02:08.283620    6312 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:02:08.283620    6312 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444: (1.1780773s)
	E0516 23:02:08.283620    6312 network_create.go:104] error while trying to create docker network newest-cni-20220516230100-2444 192.168.85.0/24: create docker network newest-cni-20220516230100-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network c03c58271fb169a4b99329bbdcb991a2c2ad243c8e207ab073782c0b3a1cfac0 (br-c03c58271fb1): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	W0516 23:02:08.283620    6312 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network newest-cni-20220516230100-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network c03c58271fb169a4b99329bbdcb991a2c2ad243c8e207ab073782c0b3a1cfac0 (br-c03c58271fb1): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network newest-cni-20220516230100-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network c03c58271fb169a4b99329bbdcb991a2c2ad243c8e207ab073782c0b3a1cfac0 (br-c03c58271fb1): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	I0516 23:02:08.299159    6312 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 23:02:09.426839    6312 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1275187s)
	I0516 23:02:09.434866    6312 cli_runner.go:164] Run: docker volume create newest-cni-20220516230100-2444 --label name.minikube.sigs.k8s.io=newest-cni-20220516230100-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 23:02:10.540301    6312 cli_runner.go:211] docker volume create newest-cni-20220516230100-2444 --label name.minikube.sigs.k8s.io=newest-cni-20220516230100-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 23:02:10.540301    6312 cli_runner.go:217] Completed: docker volume create newest-cni-20220516230100-2444 --label name.minikube.sigs.k8s.io=newest-cni-20220516230100-2444 --label created_by.minikube.sigs.k8s.io=true: (1.1054261s)
	I0516 23:02:10.540301    6312 client.go:171] LocalClient.Create took 6.7716088s
	I0516 23:02:12.560254    6312 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:02:12.567903    6312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444
	W0516 23:02:13.649777    6312 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:02:13.649777    6312 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: (1.0817779s)
	I0516 23:02:13.649777    6312 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:02:13.995680    6312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444
	W0516 23:02:15.057553    6312 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:02:15.057653    6312 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: (1.0617065s)
	W0516 23:02:15.057653    6312 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	
	W0516 23:02:15.057653    6312 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:02:15.070647    6312 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:02:15.079740    6312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444
	W0516 23:02:16.162994    6312 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:02:16.162994    6312 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: (1.0831983s)
	I0516 23:02:16.162994    6312 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:02:16.392941    6312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444
	W0516 23:02:17.523790    6312 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:02:17.523864    6312 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: (1.1308399s)
	W0516 23:02:17.523864    6312 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	
	W0516 23:02:17.523864    6312 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:02:17.523864    6312 start.go:134] duration metric: createHost completed in 13.7626871s
	I0516 23:02:17.534688    6312 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:02:17.541689    6312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444
	W0516 23:02:18.626537    6312 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:02:18.626697    6312 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: (1.0847198s)
	I0516 23:02:18.626784    6312 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:02:18.887677    6312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444
	W0516 23:02:19.966872    6312 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:02:19.966872    6312 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: (1.0791083s)
	W0516 23:02:19.966872    6312 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	
	W0516 23:02:19.966872    6312 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:02:19.978332    6312 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:02:19.986390    6312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444
	W0516 23:02:21.112268    6312 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:02:21.112315    6312 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: (1.1256303s)
	I0516 23:02:21.112494    6312 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:02:21.326139    6312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444
	W0516 23:02:22.437750    6312 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:02:22.437750    6312 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: (1.1116015s)
	W0516 23:02:22.437750    6312 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	
	W0516 23:02:22.437750    6312 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:02:22.437750    6312 fix.go:57] fixHost completed within 47.2210033s
	I0516 23:02:22.437750    6312 start.go:81] releasing machines lock for "newest-cni-20220516230100-2444", held for 47.2211713s
	W0516 23:02:22.437750    6312 out.go:239] * Failed to start docker container. Running "minikube delete -p newest-cni-20220516230100-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220516230100-2444 container: docker volume create newest-cni-20220516230100-2444 --label name.minikube.sigs.k8s.io=newest-cni-20220516230100-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220516230100-2444: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220516230100-2444': mkdir /var/lib/docker/volumes/newest-cni-20220516230100-2444: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p newest-cni-20220516230100-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220516230100-2444 container: docker volume create newest-cni-20220516230100-2444 --label name.minikube.sigs.k8s.io=newest-cni-20220516230100-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220516230100-2444: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220516230100-2444': mkdir /var/lib/docker/volumes/newest-cni-20220516230100-2444: read-only file system
	
	I0516 23:02:22.443889    6312 out.go:177] 
	W0516 23:02:22.446302    6312 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220516230100-2444 container: docker volume create newest-cni-20220516230100-2444 --label name.minikube.sigs.k8s.io=newest-cni-20220516230100-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220516230100-2444: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220516230100-2444': mkdir /var/lib/docker/volumes/newest-cni-20220516230100-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220516230100-2444 container: docker volume create newest-cni-20220516230100-2444 --label name.minikube.sigs.k8s.io=newest-cni-20220516230100-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220516230100-2444: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220516230100-2444': mkdir /var/lib/docker/volumes/newest-cni-20220516230100-2444: read-only file system
	
	W0516 23:02:22.446302    6312 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0516 23:02:22.446302    6312 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0516 23:02:22.452114    6312 out.go:177] 

** /stderr **
start_stop_delete_test.go:190: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p newest-cni-20220516230100-2444 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.6": exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220516230100-2444

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220516230100-2444: exit status 1 (1.2085207s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: newest-cni-20220516230100-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220516230100-2444 -n newest-cni-20220516230100-2444

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220516230100-2444 -n newest-cni-20220516230100-2444: exit status 7 (2.9458313s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 23:02:26.702650    5304 status.go:247] status error: host: state: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20220516230100-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (85.89s)
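The FirstStart failure above has two root causes visible in the log: `docker network create` rejected the requested 192.168.85.0/24 subnet because an existing bridge network (br-ea4bbeff936d) already covers an overlapping range, and the fallback `docker volume create` then failed because the Docker data root had gone read-only. The subnet-overlap condition the daemon is enforcing can be sketched with Python's standard `ipaddress` module (CIDR values taken from the log; `subnets_overlap` is an illustrative helper, not minikube or Docker code):

```python
import ipaddress

def subnets_overlap(a: str, b: str) -> bool:
    # True when the two IPv4 CIDR blocks share at least one address --
    # the condition that makes the daemon refuse to create a new bridge.
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

# minikube requested 192.168.85.0/24; an existing bridge already claimed an
# overlapping range, so `docker network create` exited 1 as logged above.
assert subnets_overlap("192.168.85.0/24", "192.168.85.0/24")
assert not subnets_overlap("192.168.85.0/24", "192.168.86.0/24")
```

Listing each existing network's `IPAM.Config` subnets via `docker network inspect` and running them through a check like this would identify which bridge to remove before retrying.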

TestStartStop/group/embed-certs/serial/Pause (11.65s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-20220516225628-2444 --alsologtostderr -v=1

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p embed-certs-20220516225628-2444 --alsologtostderr -v=1: exit status 80 (3.3098727s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0516 23:01:03.991369    3056 out.go:296] Setting OutFile to fd 1404 ...
	I0516 23:01:04.050360    3056 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:01:04.050360    3056 out.go:309] Setting ErrFile to fd 1388...
	I0516 23:01:04.050360    3056 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:01:04.061360    3056 out.go:303] Setting JSON to false
	I0516 23:01:04.061360    3056 mustload.go:65] Loading cluster: embed-certs-20220516225628-2444
	I0516 23:01:04.061360    3056 config.go:178] Loaded profile config "embed-certs-20220516225628-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:01:04.080372    3056 cli_runner.go:164] Run: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}
	W0516 23:01:06.706042    3056 cli_runner.go:211] docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:01:06.706042    3056 cli_runner.go:217] Completed: docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: (2.6255666s)
	I0516 23:01:06.709219    3056 out.go:177] 
	W0516 23:01:06.712250    3056 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	
	X Exiting due to GUEST_STATUS: state: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444
	
	W0516 23:01:06.712250    3056 out.go:239] * 
	* 
	W0516 23:01:06.970776    3056 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_10.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_10.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0516 23:01:06.974766    3056 out.go:177] 

** /stderr **
start_stop_delete_test.go:313: out/minikube-windows-amd64.exe pause -p embed-certs-20220516225628-2444 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220516225628-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220516225628-2444: exit status 1 (1.1491884s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220516225628-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220516225628-2444 -n embed-certs-20220516225628-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220516225628-2444 -n embed-certs-20220516225628-2444: exit status 7 (3.0357343s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 23:01:11.202355    8976 status.go:247] status error: host: state: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220516225628-2444" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220516225628-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220516225628-2444: exit status 1 (1.1190559s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220516225628-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220516225628-2444 -n embed-certs-20220516225628-2444

=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220516225628-2444 -n embed-certs-20220516225628-2444: exit status 7 (3.0086308s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 23:01:15.340356    8708 status.go:247] status error: host: state: unknown state "embed-certs-20220516225628-2444": docker container inspect embed-certs-20220516225628-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220516225628-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220516225628-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (11.65s)
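The Pause failure follows the same underlying pattern: the container backing the profile no longer exists, so every state probe (`docker container inspect --format={{.State.Status}}`) exits non-zero with "No such container", and `minikube status` reports `Nonexistent` with exit code 7. A rough sketch of that result-to-state mapping (the function and state table here are illustrative, not minikube's actual implementation):

```python
def classify_host_state(exit_code: int, stdout: str) -> str:
    """Map the result of `docker container inspect
    --format={{.State.Status}}` onto the coarse host states seen in
    the status output above (hypothetical mapping for illustration)."""
    if exit_code != 0:
        # "Error: No such container: <name>" lands here; minikube then
        # prints "Nonexistent" and exits 7, as in the post-mortem.
        return "Nonexistent"
    status = stdout.strip()
    return {"running": "Running", "paused": "Paused",
            "exited": "Stopped"}.get(status, f"unknown state {status!r}")

print(classify_host_state(1, ""))         # Nonexistent (as in the log)
print(classify_host_state(0, "running"))  # Running
```

This is why the harness notes "status error: exit status 7 (may be ok)": a missing container is an expected terminal state for status, even though it is fatal for `pause`.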

TestNetworkPlugins/group/auto/Start (81.39s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-20220516225301-2444 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p auto-20220516225301-2444 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker: exit status 60 (1m21.317215s)

-- stdout --
	* [auto-20220516225301-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node auto-20220516225301-2444 in cluster auto-20220516225301-2444
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "auto-20220516225301-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0516 23:01:12.627267    4548 out.go:296] Setting OutFile to fd 1824 ...
	I0516 23:01:12.699761    4548 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:01:12.699761    4548 out.go:309] Setting ErrFile to fd 1600...
	I0516 23:01:12.699761    4548 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:01:12.710812    4548 out.go:303] Setting JSON to false
	I0516 23:01:12.712806    4548 start.go:115] hostinfo: {"hostname":"minikube2","uptime":5185,"bootTime":1652736887,"procs":154,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0516 23:01:12.712806    4548 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0516 23:01:12.718095    4548 out.go:177] * [auto-20220516225301-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0516 23:01:12.721505    4548 notify.go:193] Checking for updates...
	I0516 23:01:12.724409    4548 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0516 23:01:12.726637    4548 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0516 23:01:12.729099    4548 out.go:177]   - MINIKUBE_LOCATION=12739
	I0516 23:01:12.733170    4548 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0516 23:01:12.736821    4548 config.go:178] Loaded profile config "default-k8s-different-port-20220516230045-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:01:12.736821    4548 config.go:178] Loaded profile config "embed-certs-20220516225628-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:01:12.737398    4548 config.go:178] Loaded profile config "multinode-20220516223121-2444-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:01:12.737398    4548 config.go:178] Loaded profile config "newest-cni-20220516230100-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:01:12.737928    4548 driver.go:358] Setting default libvirt URI to qemu:///system
	I0516 23:01:15.451203    4548 docker.go:137] docker version: linux-20.10.14
	I0516 23:01:15.461685    4548 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 23:01:17.566337    4548 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1044445s)
	I0516 23:01:17.566949    4548 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 23:01:16.5043042 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0516 23:01:17.571000    4548 out.go:177] * Using the docker driver based on user configuration
	I0516 23:01:17.574193    4548 start.go:284] selected driver: docker
	I0516 23:01:17.574271    4548 start.go:806] validating driver "docker" against <nil>
	I0516 23:01:17.574302    4548 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0516 23:01:17.647029    4548 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 23:01:19.814028    4548 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1669803s)
	I0516 23:01:19.814258    4548 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 23:01:18.7167943 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0516 23:01:19.814847    4548 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0516 23:01:19.815035    4548 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0516 23:01:19.818036    4548 out.go:177] * Using Docker Desktop driver with the root privilege
	I0516 23:01:19.820445    4548 cni.go:95] Creating CNI manager for ""
	I0516 23:01:19.820445    4548 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0516 23:01:19.820445    4548 start_flags.go:306] config:
	{Name:auto-20220516225301-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:auto-20220516225301-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRI
Socket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 23:01:19.824234    4548 out.go:177] * Starting control plane node auto-20220516225301-2444 in cluster auto-20220516225301-2444
	I0516 23:01:19.826303    4548 cache.go:120] Beginning downloading kic base image for docker with docker
	I0516 23:01:19.829644    4548 out.go:177] * Pulling base image ...
	I0516 23:01:19.832081    4548 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0516 23:01:19.832081    4548 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0516 23:01:19.832193    4548 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0516 23:01:19.832193    4548 cache.go:57] Caching tarball of preloaded images
	I0516 23:01:19.832804    4548 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0516 23:01:19.832804    4548 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0516 23:01:19.832804    4548 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-20220516225301-2444\config.json ...
	I0516 23:01:19.833451    4548 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-20220516225301-2444\config.json: {Name:mk3d02cd3771b66684aedee18c1d3b9e9fc310b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0516 23:01:20.934334    4548 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0516 23:01:20.934541    4548 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:01:20.934673    4548 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:01:20.934673    4548 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
	I0516 23:01:20.934673    4548 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory, skipping pull
	I0516 23:01:20.934673    4548 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in cache, skipping pull
	I0516 23:01:20.935256    4548 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
	I0516 23:01:20.935324    4548 cache.go:160] Loading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from local cache
	I0516 23:01:20.935389    4548 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:01:23.316992    4548 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from cached tarball
	I0516 23:01:23.316992    4548 cache.go:206] Successfully downloaded all kic artifacts
	I0516 23:01:23.316992    4548 start.go:352] acquiring machines lock for auto-20220516225301-2444: {Name:mkd99cabc395769e899f2a04225201db6b5d24ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 23:01:23.316992    4548 start.go:356] acquired machines lock for "auto-20220516225301-2444" in 0s
	I0516 23:01:23.317627    4548 start.go:91] Provisioning new machine with config: &{Name:auto-20220516225301-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:auto-20220516225301-2444 Namespace:default APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0516 23:01:23.317802    4548 start.go:131] createHost starting for "" (driver="docker")
	I0516 23:01:23.322822    4548 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0516 23:01:23.324997    4548 start.go:165] libmachine.API.Create for "auto-20220516225301-2444" (driver="docker")
	I0516 23:01:23.324997    4548 client.go:168] LocalClient.Create starting
	I0516 23:01:23.324997    4548 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 23:01:23.324997    4548 main.go:134] libmachine: Decoding PEM data...
	I0516 23:01:23.325945    4548 main.go:134] libmachine: Parsing certificate...
	I0516 23:01:23.326179    4548 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 23:01:23.326225    4548 main.go:134] libmachine: Decoding PEM data...
	I0516 23:01:23.326225    4548 main.go:134] libmachine: Parsing certificate...
	I0516 23:01:23.340821    4548 cli_runner.go:164] Run: docker network inspect auto-20220516225301-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:01:24.409273    4548 cli_runner.go:211] docker network inspect auto-20220516225301-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:01:24.409273    4548 cli_runner.go:217] Completed: docker network inspect auto-20220516225301-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0684428s)
	I0516 23:01:24.417882    4548 network_create.go:272] running [docker network inspect auto-20220516225301-2444] to gather additional debugging logs...
	I0516 23:01:24.417882    4548 cli_runner.go:164] Run: docker network inspect auto-20220516225301-2444
	W0516 23:01:25.500563    4548 cli_runner.go:211] docker network inspect auto-20220516225301-2444 returned with exit code 1
	I0516 23:01:25.500563    4548 cli_runner.go:217] Completed: docker network inspect auto-20220516225301-2444: (1.0826713s)
	I0516 23:01:25.500563    4548 network_create.go:275] error running [docker network inspect auto-20220516225301-2444]: docker network inspect auto-20220516225301-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20220516225301-2444
	I0516 23:01:25.500563    4548 network_create.go:277] output of [docker network inspect auto-20220516225301-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20220516225301-2444
	
	** /stderr **
	I0516 23:01:25.509078    4548 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 23:01:26.607264    4548 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0981192s)
	I0516 23:01:26.630542    4548 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000006368] misses:0}
	I0516 23:01:26.630542    4548 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:01:26.630727    4548 network_create.go:115] attempt to create docker network auto-20220516225301-2444 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0516 23:01:26.642164    4548 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220516225301-2444
	W0516 23:01:27.729373    4548 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220516225301-2444 returned with exit code 1
	I0516 23:01:27.729400    4548 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220516225301-2444: (1.0871264s)
	W0516 23:01:27.729400    4548 network_create.go:107] failed to create docker network auto-20220516225301-2444 192.168.49.0/24, will retry: subnet is taken
	I0516 23:01:27.750028    4548 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006368] amended:false}} dirty:map[] misses:0}
	I0516 23:01:27.750028    4548 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:01:27.771205    4548 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006368] amended:true}} dirty:map[192.168.49.0:0xc000006368 192.168.58.0:0xc0003be278] misses:0}
	I0516 23:01:27.771205    4548 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:01:27.771761    4548 network_create.go:115] attempt to create docker network auto-20220516225301-2444 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0516 23:01:27.778764    4548 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220516225301-2444
	W0516 23:01:28.858565    4548 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220516225301-2444 returned with exit code 1
	I0516 23:01:28.858565    4548 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220516225301-2444: (1.0795681s)
	W0516 23:01:28.858565    4548 network_create.go:107] failed to create docker network auto-20220516225301-2444 192.168.58.0/24, will retry: subnet is taken
	I0516 23:01:28.886635    4548 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006368] amended:true}} dirty:map[192.168.49.0:0xc000006368 192.168.58.0:0xc0003be278] misses:1}
	I0516 23:01:28.886898    4548 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:01:28.904236    4548 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006368] amended:true}} dirty:map[192.168.49.0:0xc000006368 192.168.58.0:0xc0003be278 192.168.67.0:0xc00061a2a0] misses:1}
	I0516 23:01:28.905232    4548 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:01:28.905232    4548 network_create.go:115] attempt to create docker network auto-20220516225301-2444 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0516 23:01:28.912245    4548 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220516225301-2444
	W0516 23:01:29.998738    4548 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220516225301-2444 returned with exit code 1
	I0516 23:01:29.998874    4548 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220516225301-2444: (1.086346s)
	W0516 23:01:29.998874    4548 network_create.go:107] failed to create docker network auto-20220516225301-2444 192.168.67.0/24, will retry: subnet is taken
	I0516 23:01:30.018197    4548 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006368] amended:true}} dirty:map[192.168.49.0:0xc000006368 192.168.58.0:0xc0003be278 192.168.67.0:0xc00061a2a0] misses:2}
	I0516 23:01:30.018197    4548 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:01:30.037321    4548 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006368] amended:true}} dirty:map[192.168.49.0:0xc000006368 192.168.58.0:0xc0003be278 192.168.67.0:0xc00061a2a0 192.168.76.0:0xc000006490] misses:2}
	I0516 23:01:30.037321    4548 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:01:30.037321    4548 network_create.go:115] attempt to create docker network auto-20220516225301-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0516 23:01:30.045747    4548 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220516225301-2444
	W0516 23:01:31.173698    4548 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220516225301-2444 returned with exit code 1
	I0516 23:01:31.173698    4548 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220516225301-2444: (1.1279099s)
	E0516 23:01:31.173698    4548 network_create.go:104] error while trying to create docker network auto-20220516225301-2444 192.168.76.0/24: create docker network auto-20220516225301-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220516225301-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 57607ec3eddf1888d183972c19abea197cfc9af5637a5c7febbc741122bd03ed (br-57607ec3eddf): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	W0516 23:01:31.173698    4548 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network auto-20220516225301-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220516225301-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 57607ec3eddf1888d183972c19abea197cfc9af5637a5c7febbc741122bd03ed (br-57607ec3eddf): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network auto-20220516225301-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220516225301-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 57607ec3eddf1888d183972c19abea197cfc9af5637a5c7febbc741122bd03ed (br-57607ec3eddf): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	I0516 23:01:31.193287    4548 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 23:01:32.298368    4548 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1048942s)
	I0516 23:01:32.308036    4548 cli_runner.go:164] Run: docker volume create auto-20220516225301-2444 --label name.minikube.sigs.k8s.io=auto-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 23:01:33.409414    4548 cli_runner.go:211] docker volume create auto-20220516225301-2444 --label name.minikube.sigs.k8s.io=auto-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 23:01:33.409414    4548 cli_runner.go:217] Completed: docker volume create auto-20220516225301-2444 --label name.minikube.sigs.k8s.io=auto-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true: (1.1012385s)
	I0516 23:01:33.409414    4548 client.go:171] LocalClient.Create took 10.084332s
	I0516 23:01:35.432412    4548 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:01:35.439494    4548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444
	W0516 23:01:36.566685    4548 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444 returned with exit code 1
	I0516 23:01:36.566685    4548 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444: (1.1271817s)
	I0516 23:01:36.566685    4548 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	I0516 23:01:36.860934    4548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444
	W0516 23:01:37.901553    4548 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444 returned with exit code 1
	I0516 23:01:37.901553    4548 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444: (1.0406111s)
	W0516 23:01:37.901553    4548 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	
	W0516 23:01:37.901553    4548 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	I0516 23:01:37.913544    4548 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:01:37.921541    4548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444
	W0516 23:01:38.988012    4548 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444 returned with exit code 1
	I0516 23:01:38.988215    4548 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444: (1.065416s)
	I0516 23:01:38.988215    4548 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	I0516 23:01:39.298148    4548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444
	W0516 23:01:40.359888    4548 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444 returned with exit code 1
	I0516 23:01:40.359888    4548 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444: (1.0617303s)
	W0516 23:01:40.359888    4548 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	
	W0516 23:01:40.359888    4548 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	I0516 23:01:40.359888    4548 start.go:134] duration metric: createHost completed in 17.0419408s
	I0516 23:01:40.359888    4548 start.go:81] releasing machines lock for "auto-20220516225301-2444", held for 17.0427508s
	W0516 23:01:40.359888    4548 start.go:608] error starting host: creating host: create: creating: setting up container node: creating volume for auto-20220516225301-2444 container: docker volume create auto-20220516225301-2444 --label name.minikube.sigs.k8s.io=auto-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create auto-20220516225301-2444: error while creating volume root path '/var/lib/docker/volumes/auto-20220516225301-2444': mkdir /var/lib/docker/volumes/auto-20220516225301-2444: read-only file system
	I0516 23:01:40.374887    4548 cli_runner.go:164] Run: docker container inspect auto-20220516225301-2444 --format={{.State.Status}}
	W0516 23:01:41.433109    4548 cli_runner.go:211] docker container inspect auto-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:01:41.433109    4548 cli_runner.go:217] Completed: docker container inspect auto-20220516225301-2444 --format={{.State.Status}}: (1.0582131s)
	I0516 23:01:41.433109    4548 delete.go:82] Unable to get host status for auto-20220516225301-2444, assuming it has already been deleted: state: unknown state "auto-20220516225301-2444": docker container inspect auto-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	W0516 23:01:41.433109    4548 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for auto-20220516225301-2444 container: docker volume create auto-20220516225301-2444 --label name.minikube.sigs.k8s.io=auto-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create auto-20220516225301-2444: error while creating volume root path '/var/lib/docker/volumes/auto-20220516225301-2444': mkdir /var/lib/docker/volumes/auto-20220516225301-2444: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for auto-20220516225301-2444 container: docker volume create auto-20220516225301-2444 --label name.minikube.sigs.k8s.io=auto-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create auto-20220516225301-2444: error while creating volume root path '/var/lib/docker/volumes/auto-20220516225301-2444': mkdir /var/lib/docker/volumes/auto-20220516225301-2444: read-only file system
	
	I0516 23:01:41.433109    4548 start.go:623] Will try again in 5 seconds ...
	I0516 23:01:46.442434    4548 start.go:352] acquiring machines lock for auto-20220516225301-2444: {Name:mkd99cabc395769e899f2a04225201db6b5d24ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 23:01:46.442854    4548 start.go:356] acquired machines lock for "auto-20220516225301-2444" in 224.1µs
	I0516 23:01:46.442854    4548 start.go:94] Skipping create...Using existing machine configuration
	I0516 23:01:46.442854    4548 fix.go:55] fixHost starting: 
	I0516 23:01:46.461440    4548 cli_runner.go:164] Run: docker container inspect auto-20220516225301-2444 --format={{.State.Status}}
	W0516 23:01:47.558823    4548 cli_runner.go:211] docker container inspect auto-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:01:47.558823    4548 cli_runner.go:217] Completed: docker container inspect auto-20220516225301-2444 --format={{.State.Status}}: (1.0973741s)
	I0516 23:01:47.558823    4548 fix.go:103] recreateIfNeeded on auto-20220516225301-2444: state= err=unknown state "auto-20220516225301-2444": docker container inspect auto-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	I0516 23:01:47.558993    4548 fix.go:108] machineExists: false. err=machine does not exist
	I0516 23:01:47.561772    4548 out.go:177] * docker "auto-20220516225301-2444" container is missing, will recreate.
	I0516 23:01:47.565957    4548 delete.go:124] DEMOLISHING auto-20220516225301-2444 ...
	I0516 23:01:47.579316    4548 cli_runner.go:164] Run: docker container inspect auto-20220516225301-2444 --format={{.State.Status}}
	W0516 23:01:48.681303    4548 cli_runner.go:211] docker container inspect auto-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:01:48.681303    4548 cli_runner.go:217] Completed: docker container inspect auto-20220516225301-2444 --format={{.State.Status}}: (1.1019774s)
	W0516 23:01:48.681468    4548 stop.go:75] unable to get state: unknown state "auto-20220516225301-2444": docker container inspect auto-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	I0516 23:01:48.681468    4548 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "auto-20220516225301-2444": docker container inspect auto-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	I0516 23:01:48.697982    4548 cli_runner.go:164] Run: docker container inspect auto-20220516225301-2444 --format={{.State.Status}}
	W0516 23:01:49.840621    4548 cli_runner.go:211] docker container inspect auto-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:01:49.840621    4548 cli_runner.go:217] Completed: docker container inspect auto-20220516225301-2444 --format={{.State.Status}}: (1.142629s)
	I0516 23:01:49.840621    4548 delete.go:82] Unable to get host status for auto-20220516225301-2444, assuming it has already been deleted: state: unknown state "auto-20220516225301-2444": docker container inspect auto-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	I0516 23:01:49.849142    4548 cli_runner.go:164] Run: docker container inspect -f {{.Id}} auto-20220516225301-2444
	W0516 23:01:51.016657    4548 cli_runner.go:211] docker container inspect -f {{.Id}} auto-20220516225301-2444 returned with exit code 1
	I0516 23:01:51.016657    4548 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} auto-20220516225301-2444: (1.166506s)
	I0516 23:01:51.016657    4548 kic.go:356] could not find the container auto-20220516225301-2444 to remove it. will try anyways
	I0516 23:01:51.024979    4548 cli_runner.go:164] Run: docker container inspect auto-20220516225301-2444 --format={{.State.Status}}
	W0516 23:01:52.155753    4548 cli_runner.go:211] docker container inspect auto-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:01:52.155753    4548 cli_runner.go:217] Completed: docker container inspect auto-20220516225301-2444 --format={{.State.Status}}: (1.1307649s)
	W0516 23:01:52.155753    4548 oci.go:84] error getting container status, will try to delete anyways: unknown state "auto-20220516225301-2444": docker container inspect auto-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	I0516 23:01:52.165801    4548 cli_runner.go:164] Run: docker exec --privileged -t auto-20220516225301-2444 /bin/bash -c "sudo init 0"
	W0516 23:01:53.256158    4548 cli_runner.go:211] docker exec --privileged -t auto-20220516225301-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 23:01:53.256360    4548 cli_runner.go:217] Completed: docker exec --privileged -t auto-20220516225301-2444 /bin/bash -c "sudo init 0": (1.0903476s)
	I0516 23:01:53.256360    4548 oci.go:641] error shutdown auto-20220516225301-2444: docker exec --privileged -t auto-20220516225301-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	I0516 23:01:54.275508    4548 cli_runner.go:164] Run: docker container inspect auto-20220516225301-2444 --format={{.State.Status}}
	W0516 23:01:55.326191    4548 cli_runner.go:211] docker container inspect auto-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:01:55.326191    4548 cli_runner.go:217] Completed: docker container inspect auto-20220516225301-2444 --format={{.State.Status}}: (1.0505398s)
	I0516 23:01:55.326191    4548 oci.go:653] temporary error verifying shutdown: unknown state "auto-20220516225301-2444": docker container inspect auto-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	I0516 23:01:55.326191    4548 oci.go:655] temporary error: container auto-20220516225301-2444 status is  but expect it to be exited
	I0516 23:01:55.326191    4548 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "auto-20220516225301-2444": docker container inspect auto-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	I0516 23:01:55.808389    4548 cli_runner.go:164] Run: docker container inspect auto-20220516225301-2444 --format={{.State.Status}}
	W0516 23:01:56.863407    4548 cli_runner.go:211] docker container inspect auto-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:01:56.863443    4548 cli_runner.go:217] Completed: docker container inspect auto-20220516225301-2444 --format={{.State.Status}}: (1.0548735s)
	I0516 23:01:56.863635    4548 oci.go:653] temporary error verifying shutdown: unknown state "auto-20220516225301-2444": docker container inspect auto-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	I0516 23:01:56.863750    4548 oci.go:655] temporary error: container auto-20220516225301-2444 status is  but expect it to be exited
	I0516 23:01:56.863806    4548 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "auto-20220516225301-2444": docker container inspect auto-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	I0516 23:01:57.769994    4548 cli_runner.go:164] Run: docker container inspect auto-20220516225301-2444 --format={{.State.Status}}
	W0516 23:01:58.861660    4548 cli_runner.go:211] docker container inspect auto-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:01:58.861789    4548 cli_runner.go:217] Completed: docker container inspect auto-20220516225301-2444 --format={{.State.Status}}: (1.0915113s)
	I0516 23:01:58.861789    4548 oci.go:653] temporary error verifying shutdown: unknown state "auto-20220516225301-2444": docker container inspect auto-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	I0516 23:01:58.861789    4548 oci.go:655] temporary error: container auto-20220516225301-2444 status is  but expect it to be exited
	I0516 23:01:58.861789    4548 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "auto-20220516225301-2444": docker container inspect auto-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	I0516 23:01:59.516349    4548 cli_runner.go:164] Run: docker container inspect auto-20220516225301-2444 --format={{.State.Status}}
	W0516 23:02:00.652451    4548 cli_runner.go:211] docker container inspect auto-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:02:00.652451    4548 cli_runner.go:217] Completed: docker container inspect auto-20220516225301-2444 --format={{.State.Status}}: (1.1360581s)
	I0516 23:02:00.652451    4548 oci.go:653] temporary error verifying shutdown: unknown state "auto-20220516225301-2444": docker container inspect auto-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	I0516 23:02:00.652451    4548 oci.go:655] temporary error: container auto-20220516225301-2444 status is  but expect it to be exited
	I0516 23:02:00.652451    4548 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "auto-20220516225301-2444": docker container inspect auto-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	I0516 23:02:01.783110    4548 cli_runner.go:164] Run: docker container inspect auto-20220516225301-2444 --format={{.State.Status}}
	W0516 23:02:02.861320    4548 cli_runner.go:211] docker container inspect auto-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:02:02.861320    4548 cli_runner.go:217] Completed: docker container inspect auto-20220516225301-2444 --format={{.State.Status}}: (1.0779284s)
	I0516 23:02:02.861320    4548 oci.go:653] temporary error verifying shutdown: unknown state "auto-20220516225301-2444": docker container inspect auto-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	I0516 23:02:02.861320    4548 oci.go:655] temporary error: container auto-20220516225301-2444 status is  but expect it to be exited
	I0516 23:02:02.861320    4548 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "auto-20220516225301-2444": docker container inspect auto-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	I0516 23:02:04.390145    4548 cli_runner.go:164] Run: docker container inspect auto-20220516225301-2444 --format={{.State.Status}}
	W0516 23:02:05.482141    4548 cli_runner.go:211] docker container inspect auto-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:02:05.482391    4548 cli_runner.go:217] Completed: docker container inspect auto-20220516225301-2444 --format={{.State.Status}}: (1.09183s)
	I0516 23:02:05.482464    4548 oci.go:653] temporary error verifying shutdown: unknown state "auto-20220516225301-2444": docker container inspect auto-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	I0516 23:02:05.482494    4548 oci.go:655] temporary error: container auto-20220516225301-2444 status is  but expect it to be exited
	I0516 23:02:05.482541    4548 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "auto-20220516225301-2444": docker container inspect auto-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	I0516 23:02:08.540677    4548 cli_runner.go:164] Run: docker container inspect auto-20220516225301-2444 --format={{.State.Status}}
	W0516 23:02:09.630553    4548 cli_runner.go:211] docker container inspect auto-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:02:09.630553    4548 cli_runner.go:217] Completed: docker container inspect auto-20220516225301-2444 --format={{.State.Status}}: (1.0898666s)
	I0516 23:02:09.630553    4548 oci.go:653] temporary error verifying shutdown: unknown state "auto-20220516225301-2444": docker container inspect auto-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	I0516 23:02:09.630553    4548 oci.go:655] temporary error: container auto-20220516225301-2444 status is  but expect it to be exited
	I0516 23:02:09.630553    4548 oci.go:88] couldn't shut down auto-20220516225301-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "auto-20220516225301-2444": docker container inspect auto-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	 
	I0516 23:02:09.637976    4548 cli_runner.go:164] Run: docker rm -f -v auto-20220516225301-2444
	I0516 23:02:10.745442    4548 cli_runner.go:217] Completed: docker rm -f -v auto-20220516225301-2444: (1.1074561s)
	I0516 23:02:10.753439    4548 cli_runner.go:164] Run: docker container inspect -f {{.Id}} auto-20220516225301-2444
	W0516 23:02:11.841049    4548 cli_runner.go:211] docker container inspect -f {{.Id}} auto-20220516225301-2444 returned with exit code 1
	I0516 23:02:11.841049    4548 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} auto-20220516225301-2444: (1.0876012s)
	I0516 23:02:11.848034    4548 cli_runner.go:164] Run: docker network inspect auto-20220516225301-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:02:12.910547    4548 cli_runner.go:211] docker network inspect auto-20220516225301-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:02:12.910547    4548 cli_runner.go:217] Completed: docker network inspect auto-20220516225301-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0625048s)
	I0516 23:02:12.919829    4548 network_create.go:272] running [docker network inspect auto-20220516225301-2444] to gather additional debugging logs...
	I0516 23:02:12.919829    4548 cli_runner.go:164] Run: docker network inspect auto-20220516225301-2444
	W0516 23:02:14.028055    4548 cli_runner.go:211] docker network inspect auto-20220516225301-2444 returned with exit code 1
	I0516 23:02:14.028127    4548 cli_runner.go:217] Completed: docker network inspect auto-20220516225301-2444: (1.1081773s)
	I0516 23:02:14.028158    4548 network_create.go:275] error running [docker network inspect auto-20220516225301-2444]: docker network inspect auto-20220516225301-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20220516225301-2444
	I0516 23:02:14.028158    4548 network_create.go:277] output of [docker network inspect auto-20220516225301-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20220516225301-2444
	
	** /stderr **
	W0516 23:02:14.029192    4548 delete.go:139] delete failed (probably ok) <nil>
	I0516 23:02:14.029263    4548 fix.go:115] Sleeping 1 second for extra luck!
	I0516 23:02:15.041582    4548 start.go:131] createHost starting for "" (driver="docker")
	I0516 23:02:15.045979    4548 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0516 23:02:15.046300    4548 start.go:165] libmachine.API.Create for "auto-20220516225301-2444" (driver="docker")
	I0516 23:02:15.046380    4548 client.go:168] LocalClient.Create starting
	I0516 23:02:15.047024    4548 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 23:02:15.047359    4548 main.go:134] libmachine: Decoding PEM data...
	I0516 23:02:15.047359    4548 main.go:134] libmachine: Parsing certificate...
	I0516 23:02:15.047626    4548 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 23:02:15.047903    4548 main.go:134] libmachine: Decoding PEM data...
	I0516 23:02:15.047942    4548 main.go:134] libmachine: Parsing certificate...
	I0516 23:02:15.059641    4548 cli_runner.go:164] Run: docker network inspect auto-20220516225301-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:02:16.147014    4548 cli_runner.go:211] docker network inspect auto-20220516225301-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:02:16.147014    4548 cli_runner.go:217] Completed: docker network inspect auto-20220516225301-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.087296s)
	I0516 23:02:16.154993    4548 network_create.go:272] running [docker network inspect auto-20220516225301-2444] to gather additional debugging logs...
	I0516 23:02:16.154993    4548 cli_runner.go:164] Run: docker network inspect auto-20220516225301-2444
	W0516 23:02:17.236636    4548 cli_runner.go:211] docker network inspect auto-20220516225301-2444 returned with exit code 1
	I0516 23:02:17.236775    4548 cli_runner.go:217] Completed: docker network inspect auto-20220516225301-2444: (1.081634s)
	I0516 23:02:17.236840    4548 network_create.go:275] error running [docker network inspect auto-20220516225301-2444]: docker network inspect auto-20220516225301-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20220516225301-2444
	I0516 23:02:17.236840    4548 network_create.go:277] output of [docker network inspect auto-20220516225301-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20220516225301-2444
	
	** /stderr **
	I0516 23:02:17.244442    4548 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 23:02:18.330941    4548 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0864143s)
	I0516 23:02:18.347591    4548 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006368] amended:true}} dirty:map[192.168.49.0:0xc000006368 192.168.58.0:0xc0003be278 192.168.67.0:0xc00061a2a0 192.168.76.0:0xc000006490] misses:2}
	I0516 23:02:18.347591    4548 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:02:18.363690    4548 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006368] amended:true}} dirty:map[192.168.49.0:0xc000006368 192.168.58.0:0xc0003be278 192.168.67.0:0xc00061a2a0 192.168.76.0:0xc000006490] misses:3}
	I0516 23:02:18.363690    4548 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:02:18.380210    4548 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006368 192.168.58.0:0xc0003be278 192.168.67.0:0xc00061a2a0 192.168.76.0:0xc000006490] amended:false}} dirty:map[] misses:0}
	I0516 23:02:18.380210    4548 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:02:18.395382    4548 network.go:279] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006368 192.168.58.0:0xc0003be278 192.168.67.0:0xc00061a2a0 192.168.76.0:0xc000006490] amended:false}} dirty:map[] misses:0}
	I0516 23:02:18.395382    4548 network.go:238] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:02:18.410763    4548 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006368 192.168.58.0:0xc0003be278 192.168.67.0:0xc00061a2a0 192.168.76.0:0xc000006490] amended:true}} dirty:map[192.168.49.0:0xc000006368 192.168.58.0:0xc0003be278 192.168.67.0:0xc00061a2a0 192.168.76.0:0xc000006490 192.168.85.0:0xc0000064b0] misses:0}
	I0516 23:02:18.410763    4548 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:02:18.410763    4548 network_create.go:115] attempt to create docker network auto-20220516225301-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0516 23:02:18.419556    4548 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220516225301-2444
	W0516 23:02:19.478838    4548 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220516225301-2444 returned with exit code 1
	I0516 23:02:19.478894    4548 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220516225301-2444: (1.0580687s)
	E0516 23:02:19.478894    4548 network_create.go:104] error while trying to create docker network auto-20220516225301-2444 192.168.85.0/24: create docker network auto-20220516225301-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220516225301-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network aa51d20cd16b74c05a0cfc4bc3e100cec46da379530674d115cb498de813879e (br-aa51d20cd16b): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	W0516 23:02:19.478894    4548 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network auto-20220516225301-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220516225301-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network aa51d20cd16b74c05a0cfc4bc3e100cec46da379530674d115cb498de813879e (br-aa51d20cd16b): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network auto-20220516225301-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220516225301-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network aa51d20cd16b74c05a0cfc4bc3e100cec46da379530674d115cb498de813879e (br-aa51d20cd16b): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	I0516 23:02:19.496827    4548 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 23:02:20.577998    4548 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0811623s)
	I0516 23:02:20.584999    4548 cli_runner.go:164] Run: docker volume create auto-20220516225301-2444 --label name.minikube.sigs.k8s.io=auto-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 23:02:21.661158    4548 cli_runner.go:211] docker volume create auto-20220516225301-2444 --label name.minikube.sigs.k8s.io=auto-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 23:02:21.661392    4548 cli_runner.go:217] Completed: docker volume create auto-20220516225301-2444 --label name.minikube.sigs.k8s.io=auto-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true: (1.07615s)
	I0516 23:02:21.661392    4548 client.go:171] LocalClient.Create took 6.6149067s
	I0516 23:02:23.691244    4548 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:02:23.699909    4548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444
	W0516 23:02:24.871163    4548 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444 returned with exit code 1
	I0516 23:02:24.871163    4548 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444: (1.1711094s)
	I0516 23:02:24.871163    4548 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	I0516 23:02:25.212327    4548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444
	W0516 23:02:26.263884    4548 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444 returned with exit code 1
	I0516 23:02:26.263884    4548 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444: (1.0512211s)
	W0516 23:02:26.263884    4548 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	
	W0516 23:02:26.263884    4548 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	I0516 23:02:26.276281    4548 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:02:26.284281    4548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444
	W0516 23:02:27.397169    4548 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444 returned with exit code 1
	I0516 23:02:27.397483    4548 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444: (1.1128788s)
	I0516 23:02:27.397704    4548 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	I0516 23:02:27.629902    4548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444
	W0516 23:02:28.724319    4548 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444 returned with exit code 1
	I0516 23:02:28.724551    4548 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444: (1.0943459s)
	W0516 23:02:28.724752    4548 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	
	W0516 23:02:28.724817    4548 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	I0516 23:02:28.724817    4548 start.go:134] duration metric: createHost completed in 13.6829216s
	I0516 23:02:28.740034    4548 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:02:28.750614    4548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444
	W0516 23:02:29.819202    4548 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444 returned with exit code 1
	I0516 23:02:29.819202    4548 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444: (1.0685786s)
	I0516 23:02:29.819202    4548 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	I0516 23:02:30.083341    4548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444
	W0516 23:02:31.220291    4548 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444 returned with exit code 1
	I0516 23:02:31.220291    4548 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444: (1.1369399s)
	W0516 23:02:31.220291    4548 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	
	W0516 23:02:31.220291    4548 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	I0516 23:02:31.231090    4548 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:02:31.238090    4548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444
	W0516 23:02:32.375375    4548 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444 returned with exit code 1
	I0516 23:02:32.375465    4548 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444: (1.1371058s)
	I0516 23:02:32.375649    4548 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	I0516 23:02:32.592722    4548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444
	W0516 23:02:33.658493    4548 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444 returned with exit code 1
	I0516 23:02:33.658493    4548 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444: (1.065762s)
	W0516 23:02:33.658493    4548 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	
	W0516 23:02:33.658493    4548 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220516225301-2444
	I0516 23:02:33.658493    4548 fix.go:57] fixHost completed within 47.2152346s
	I0516 23:02:33.658493    4548 start.go:81] releasing machines lock for "auto-20220516225301-2444", held for 47.2152346s
	W0516 23:02:33.659181    4548 out.go:239] * Failed to start docker container. Running "minikube delete -p auto-20220516225301-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for auto-20220516225301-2444 container: docker volume create auto-20220516225301-2444 --label name.minikube.sigs.k8s.io=auto-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create auto-20220516225301-2444: error while creating volume root path '/var/lib/docker/volumes/auto-20220516225301-2444': mkdir /var/lib/docker/volumes/auto-20220516225301-2444: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p auto-20220516225301-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for auto-20220516225301-2444 container: docker volume create auto-20220516225301-2444 --label name.minikube.sigs.k8s.io=auto-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create auto-20220516225301-2444: error while creating volume root path '/var/lib/docker/volumes/auto-20220516225301-2444': mkdir /var/lib/docker/volumes/auto-20220516225301-2444: read-only file system
	
	I0516 23:02:33.663670    4548 out.go:177] 
	W0516 23:02:33.666103    4548 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for auto-20220516225301-2444 container: docker volume create auto-20220516225301-2444 --label name.minikube.sigs.k8s.io=auto-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create auto-20220516225301-2444: error while creating volume root path '/var/lib/docker/volumes/auto-20220516225301-2444': mkdir /var/lib/docker/volumes/auto-20220516225301-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for auto-20220516225301-2444 container: docker volume create auto-20220516225301-2444 --label name.minikube.sigs.k8s.io=auto-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create auto-20220516225301-2444: error while creating volume root path '/var/lib/docker/volumes/auto-20220516225301-2444': mkdir /var/lib/docker/volumes/auto-20220516225301-2444: read-only file system
	
	W0516 23:02:33.666103    4548 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0516 23:02:33.666741    4548 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0516 23:02:33.670513    4548 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 60
--- FAIL: TestNetworkPlugins/group/auto/Start (81.39s)
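Editor's note: the failure above has two layers. First, the Docker daemon rejected the new 192.168.85.0/24 bridge network because its address range overlapped an existing bridge network; minikube then fell back to `docker volume create`, which failed because `/var/lib/docker` was mounted read-only. The overlap condition the daemon enforces can be sketched with Python's stdlib `ipaddress` module (an illustration only, not minikube's actual Go implementation in network.go):

```python
import ipaddress

def subnets_overlap(a: str, b: str) -> bool:
    """True if the two CIDR blocks share at least one address."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

# The subnet minikube tried to create vs. the reserved ones it skipped:
print(subnets_overlap("192.168.85.0/24", "192.168.85.0/24"))  # True  -> daemon rejects
print(subnets_overlap("192.168.85.0/24", "192.168.76.0/24"))  # False -> would coexist
```

This is why the log shows minikube probing 192.168.49.0/24 through 192.168.76.0/24 before settling on 192.168.85.0/24: it reserves the first candidate that does not overlap any subnet it knows about, but the daemon here still saw a conflict with a bridge network outside that reservation map.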

TestNetworkPlugins/group/false/Start (81.18s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-20220516225309-2444 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker

=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p false-20220516225309-2444 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker: exit status 60 (1m21.0724142s)

-- stdout --
	* [false-20220516225309-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node false-20220516225309-2444 in cluster false-20220516225309-2444
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "false-20220516225309-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0516 23:01:31.960245    6112 out.go:296] Setting OutFile to fd 1516 ...
	I0516 23:01:32.018170    6112 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:01:32.018170    6112 out.go:309] Setting ErrFile to fd 1480...
	I0516 23:01:32.018170    6112 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:01:32.035029    6112 out.go:303] Setting JSON to false
	I0516 23:01:32.043343    6112 start.go:115] hostinfo: {"hostname":"minikube2","uptime":5204,"bootTime":1652736888,"procs":155,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0516 23:01:32.043926    6112 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0516 23:01:32.049133    6112 out.go:177] * [false-20220516225309-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0516 23:01:32.052484    6112 notify.go:193] Checking for updates...
	I0516 23:01:32.054803    6112 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0516 23:01:32.056928    6112 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0516 23:01:32.059589    6112 out.go:177]   - MINIKUBE_LOCATION=12739
	I0516 23:01:32.061204    6112 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0516 23:01:32.064790    6112 config.go:178] Loaded profile config "auto-20220516225301-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:01:32.065752    6112 config.go:178] Loaded profile config "default-k8s-different-port-20220516230045-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:01:32.066425    6112 config.go:178] Loaded profile config "multinode-20220516223121-2444-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:01:32.066571    6112 config.go:178] Loaded profile config "newest-cni-20220516230100-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:01:32.066571    6112 driver.go:358] Setting default libvirt URI to qemu:///system
	I0516 23:01:34.707799    6112 docker.go:137] docker version: linux-20.10.14
	I0516 23:01:34.717058    6112 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 23:01:36.776417    6112 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.059049s)
	I0516 23:01:36.777037    6112 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 23:01:35.7180161 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0516 23:01:36.780121    6112 out.go:177] * Using the docker driver based on user configuration
	I0516 23:01:36.783751    6112 start.go:284] selected driver: docker
	I0516 23:01:36.783751    6112 start.go:806] validating driver "docker" against <nil>
	I0516 23:01:36.783751    6112 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0516 23:01:36.850286    6112 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 23:01:38.925702    6112 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0753983s)
	I0516 23:01:38.925702    6112 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 23:01:37.8783926 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0516 23:01:38.925702    6112 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0516 23:01:38.926706    6112 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0516 23:01:38.931702    6112 out.go:177] * Using Docker Desktop driver with the root privilege
	I0516 23:01:38.933711    6112 cni.go:95] Creating CNI manager for "false"
	I0516 23:01:38.933711    6112 start_flags.go:306] config:
	{Name:false-20220516225309-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:false-20220516225309-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 23:01:38.937711    6112 out.go:177] * Starting control plane node false-20220516225309-2444 in cluster false-20220516225309-2444
	I0516 23:01:38.939703    6112 cache.go:120] Beginning downloading kic base image for docker with docker
	I0516 23:01:38.941702    6112 out.go:177] * Pulling base image ...
	I0516 23:01:38.945708    6112 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0516 23:01:38.945708    6112 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0516 23:01:38.945708    6112 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0516 23:01:38.945708    6112 cache.go:57] Caching tarball of preloaded images
	I0516 23:01:38.946711    6112 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0516 23:01:38.946711    6112 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0516 23:01:38.946711    6112 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\false-20220516225309-2444\config.json ...
	I0516 23:01:38.946711    6112 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\false-20220516225309-2444\config.json: {Name:mk84eb8d4038dad3bcedf2d7ec024ba712060636 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0516 23:01:40.029017    6112 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0516 23:01:40.029017    6112 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:01:40.029017    6112 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:01:40.029017    6112 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
	I0516 23:01:40.029017    6112 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory, skipping pull
	I0516 23:01:40.029017    6112 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in cache, skipping pull
	I0516 23:01:40.029602    6112 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
	I0516 23:01:40.029602    6112 cache.go:160] Loading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from local cache
	I0516 23:01:40.029602    6112 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:01:42.357456    6112 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from cached tarball
	I0516 23:01:42.357543    6112 cache.go:206] Successfully downloaded all kic artifacts
	I0516 23:01:42.357639    6112 start.go:352] acquiring machines lock for false-20220516225309-2444: {Name:mke3081f3e41aa2f1cdc880ab726432495dfd525 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 23:01:42.357958    6112 start.go:356] acquired machines lock for "false-20220516225309-2444" in 250.6µs
	I0516 23:01:42.357958    6112 start.go:91] Provisioning new machine with config: &{Name:false-20220516225309-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:false-20220516225309-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0516 23:01:42.357958    6112 start.go:131] createHost starting for "" (driver="docker")
	I0516 23:01:42.363154    6112 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0516 23:01:42.363800    6112 start.go:165] libmachine.API.Create for "false-20220516225309-2444" (driver="docker")
	I0516 23:01:42.363876    6112 client.go:168] LocalClient.Create starting
	I0516 23:01:42.364049    6112 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 23:01:42.364580    6112 main.go:134] libmachine: Decoding PEM data...
	I0516 23:01:42.364709    6112 main.go:134] libmachine: Parsing certificate...
	I0516 23:01:42.364902    6112 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 23:01:42.364902    6112 main.go:134] libmachine: Decoding PEM data...
	I0516 23:01:42.364902    6112 main.go:134] libmachine: Parsing certificate...
	I0516 23:01:42.376430    6112 cli_runner.go:164] Run: docker network inspect false-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:01:43.450135    6112 cli_runner.go:211] docker network inspect false-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:01:43.450180    6112 cli_runner.go:217] Completed: docker network inspect false-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0734907s)
	I0516 23:01:43.458379    6112 network_create.go:272] running [docker network inspect false-20220516225309-2444] to gather additional debugging logs...
	I0516 23:01:43.458379    6112 cli_runner.go:164] Run: docker network inspect false-20220516225309-2444
	W0516 23:01:44.539191    6112 cli_runner.go:211] docker network inspect false-20220516225309-2444 returned with exit code 1
	I0516 23:01:44.539191    6112 cli_runner.go:217] Completed: docker network inspect false-20220516225309-2444: (1.080803s)
	I0516 23:01:44.539191    6112 network_create.go:275] error running [docker network inspect false-20220516225309-2444]: docker network inspect false-20220516225309-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: false-20220516225309-2444
	I0516 23:01:44.539191    6112 network_create.go:277] output of [docker network inspect false-20220516225309-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: false-20220516225309-2444
	
	** /stderr **
	I0516 23:01:44.546191    6112 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 23:01:45.625055    6112 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0788175s)
	I0516 23:01:45.648536    6112 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00010a378] misses:0}
	I0516 23:01:45.648698    6112 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:01:45.648698    6112 network_create.go:115] attempt to create docker network false-20220516225309-2444 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0516 23:01:45.655960    6112 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220516225309-2444
	W0516 23:01:46.735704    6112 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220516225309-2444 returned with exit code 1
	I0516 23:01:46.735704    6112 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220516225309-2444: (1.0797352s)
	W0516 23:01:46.735704    6112 network_create.go:107] failed to create docker network false-20220516225309-2444 192.168.49.0/24, will retry: subnet is taken
	I0516 23:01:46.754249    6112 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00010a378] amended:false}} dirty:map[] misses:0}
	I0516 23:01:46.755242    6112 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:01:46.774889    6112 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00010a378] amended:true}} dirty:map[192.168.49.0:0xc00010a378 192.168.58.0:0xc0005b77f0] misses:0}
	I0516 23:01:46.774889    6112 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:01:46.774889    6112 network_create.go:115] attempt to create docker network false-20220516225309-2444 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0516 23:01:46.784028    6112 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220516225309-2444
	W0516 23:01:47.868271    6112 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220516225309-2444 returned with exit code 1
	I0516 23:01:47.868640    6112 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220516225309-2444: (1.0842337s)
	W0516 23:01:47.868701    6112 network_create.go:107] failed to create docker network false-20220516225309-2444 192.168.58.0/24, will retry: subnet is taken
	I0516 23:01:47.888434    6112 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00010a378] amended:true}} dirty:map[192.168.49.0:0xc00010a378 192.168.58.0:0xc0005b77f0] misses:1}
	I0516 23:01:47.888434    6112 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:01:47.908685    6112 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00010a378] amended:true}} dirty:map[192.168.49.0:0xc00010a378 192.168.58.0:0xc0005b77f0 192.168.67.0:0xc00065a178] misses:1}
	I0516 23:01:47.909047    6112 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:01:47.909074    6112 network_create.go:115] attempt to create docker network false-20220516225309-2444 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0516 23:01:47.918303    6112 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220516225309-2444
	W0516 23:01:49.024245    6112 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220516225309-2444 returned with exit code 1
	I0516 23:01:49.024245    6112 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220516225309-2444: (1.1049583s)
	W0516 23:01:49.024245    6112 network_create.go:107] failed to create docker network false-20220516225309-2444 192.168.67.0/24, will retry: subnet is taken
	I0516 23:01:49.043227    6112 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00010a378] amended:true}} dirty:map[192.168.49.0:0xc00010a378 192.168.58.0:0xc0005b77f0 192.168.67.0:0xc00065a178] misses:2}
	I0516 23:01:49.043227    6112 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:01:49.062478    6112 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00010a378] amended:true}} dirty:map[192.168.49.0:0xc00010a378 192.168.58.0:0xc0005b77f0 192.168.67.0:0xc00065a178 192.168.76.0:0xc0005b7888] misses:2}
	I0516 23:01:49.063021    6112 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:01:49.063081    6112 network_create.go:115] attempt to create docker network false-20220516225309-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0516 23:01:49.071544    6112 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220516225309-2444
	W0516 23:01:50.218515    6112 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220516225309-2444 returned with exit code 1
	I0516 23:01:50.218663    6112 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220516225309-2444: (1.146854s)
	E0516 23:01:50.218730    6112 network_create.go:104] error while trying to create docker network false-20220516225309-2444 192.168.76.0/24: create docker network false-20220516225309-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220516225309-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 81c20911b9635fb5f3ba7cbfb3634cf00913f79a0bcc5dab536b11b910699518 (br-81c20911b963): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	W0516 23:01:50.218861    6112 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network false-20220516225309-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220516225309-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 81c20911b9635fb5f3ba7cbfb3634cf00913f79a0bcc5dab536b11b910699518 (br-81c20911b963): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network false-20220516225309-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220516225309-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 81c20911b9635fb5f3ba7cbfb3634cf00913f79a0bcc5dab536b11b910699518 (br-81c20911b963): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	I0516 23:01:50.236023    6112 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 23:01:51.391968    6112 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1559354s)
	I0516 23:01:51.402106    6112 cli_runner.go:164] Run: docker volume create false-20220516225309-2444 --label name.minikube.sigs.k8s.io=false-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 23:01:52.563004    6112 cli_runner.go:211] docker volume create false-20220516225309-2444 --label name.minikube.sigs.k8s.io=false-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 23:01:52.563071    6112 cli_runner.go:217] Completed: docker volume create false-20220516225309-2444 --label name.minikube.sigs.k8s.io=false-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: (1.1606742s)
	I0516 23:01:52.563169    6112 client.go:171] LocalClient.Create took 10.199174s
	I0516 23:01:54.578015    6112 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:01:54.584099    6112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444
	W0516 23:01:55.675715    6112 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444 returned with exit code 1
	I0516 23:01:55.675715    6112 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444: (1.0916061s)
	I0516 23:01:55.675715    6112 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220516225309-2444
	I0516 23:01:55.962695    6112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444
	W0516 23:01:57.020061    6112 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444 returned with exit code 1
	I0516 23:01:57.020061    6112 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444: (1.0568305s)
	W0516 23:01:57.023473    6112 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220516225309-2444
	
	W0516 23:01:57.023473    6112 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220516225309-2444
	I0516 23:01:57.033356    6112 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:01:57.040831    6112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444
	W0516 23:01:58.101065    6112 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444 returned with exit code 1
	I0516 23:01:58.101065    6112 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444: (1.0602248s)
	I0516 23:01:58.101065    6112 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220516225309-2444
	I0516 23:01:58.415223    6112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444
	W0516 23:01:59.492065    6112 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444 returned with exit code 1
	I0516 23:01:59.492065    6112 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444: (1.0767498s)
	W0516 23:01:59.492065    6112 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220516225309-2444
	
	W0516 23:01:59.492065    6112 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220516225309-2444
	I0516 23:01:59.492065    6112 start.go:134] duration metric: createHost completed in 17.133961s
	I0516 23:01:59.492065    6112 start.go:81] releasing machines lock for "false-20220516225309-2444", held for 17.133961s
	W0516 23:01:59.492065    6112 start.go:608] error starting host: creating host: create: creating: setting up container node: creating volume for false-20220516225309-2444 container: docker volume create false-20220516225309-2444 --label name.minikube.sigs.k8s.io=false-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create false-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/false-20220516225309-2444': mkdir /var/lib/docker/volumes/false-20220516225309-2444: read-only file system
	I0516 23:01:59.510115    6112 cli_runner.go:164] Run: docker container inspect false-20220516225309-2444 --format={{.State.Status}}
	W0516 23:02:00.636393    6112 cli_runner.go:211] docker container inspect false-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:02:00.636393    6112 cli_runner.go:217] Completed: docker container inspect false-20220516225309-2444 --format={{.State.Status}}: (1.1262684s)
	I0516 23:02:00.636393    6112 delete.go:82] Unable to get host status for false-20220516225309-2444, assuming it has already been deleted: state: unknown state "false-20220516225309-2444": docker container inspect false-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220516225309-2444
	W0516 23:02:00.636393    6112 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for false-20220516225309-2444 container: docker volume create false-20220516225309-2444 --label name.minikube.sigs.k8s.io=false-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create false-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/false-20220516225309-2444': mkdir /var/lib/docker/volumes/false-20220516225309-2444: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for false-20220516225309-2444 container: docker volume create false-20220516225309-2444 --label name.minikube.sigs.k8s.io=false-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create false-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/false-20220516225309-2444': mkdir /var/lib/docker/volumes/false-20220516225309-2444: read-only file system
	
	I0516 23:02:00.636393    6112 start.go:623] Will try again in 5 seconds ...
	I0516 23:02:05.641845    6112 start.go:352] acquiring machines lock for false-20220516225309-2444: {Name:mke3081f3e41aa2f1cdc880ab726432495dfd525 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 23:02:05.641845    6112 start.go:356] acquired machines lock for "false-20220516225309-2444" in 0s
	I0516 23:02:05.641845    6112 start.go:94] Skipping create...Using existing machine configuration
	I0516 23:02:05.641845    6112 fix.go:55] fixHost starting: 
	I0516 23:02:05.657589    6112 cli_runner.go:164] Run: docker container inspect false-20220516225309-2444 --format={{.State.Status}}
	W0516 23:02:06.735622    6112 cli_runner.go:211] docker container inspect false-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:02:06.735622    6112 cli_runner.go:217] Completed: docker container inspect false-20220516225309-2444 --format={{.State.Status}}: (1.0779935s)
	I0516 23:02:06.735891    6112 fix.go:103] recreateIfNeeded on false-20220516225309-2444: state= err=unknown state "false-20220516225309-2444": docker container inspect false-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220516225309-2444
	I0516 23:02:06.735921    6112 fix.go:108] machineExists: false. err=machine does not exist
	I0516 23:02:06.740133    6112 out.go:177] * docker "false-20220516225309-2444" container is missing, will recreate.
	I0516 23:02:06.742143    6112 delete.go:124] DEMOLISHING false-20220516225309-2444 ...
	I0516 23:02:06.757420    6112 cli_runner.go:164] Run: docker container inspect false-20220516225309-2444 --format={{.State.Status}}
	W0516 23:02:07.922730    6112 cli_runner.go:211] docker container inspect false-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:02:07.922886    6112 cli_runner.go:217] Completed: docker container inspect false-20220516225309-2444 --format={{.State.Status}}: (1.1651591s)
	W0516 23:02:07.922933    6112 stop.go:75] unable to get state: unknown state "false-20220516225309-2444": docker container inspect false-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220516225309-2444
	I0516 23:02:07.922933    6112 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "false-20220516225309-2444": docker container inspect false-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220516225309-2444
	I0516 23:02:07.939171    6112 cli_runner.go:164] Run: docker container inspect false-20220516225309-2444 --format={{.State.Status}}
	W0516 23:02:09.046820    6112 cli_runner.go:211] docker container inspect false-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:02:09.046820    6112 cli_runner.go:217] Completed: docker container inspect false-20220516225309-2444 --format={{.State.Status}}: (1.107502s)
	I0516 23:02:09.046820    6112 delete.go:82] Unable to get host status for false-20220516225309-2444, assuming it has already been deleted: state: unknown state "false-20220516225309-2444": docker container inspect false-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220516225309-2444
	I0516 23:02:09.054241    6112 cli_runner.go:164] Run: docker container inspect -f {{.Id}} false-20220516225309-2444
	W0516 23:02:10.148403    6112 cli_runner.go:211] docker container inspect -f {{.Id}} false-20220516225309-2444 returned with exit code 1
	I0516 23:02:10.148403    6112 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} false-20220516225309-2444: (1.0941524s)
	I0516 23:02:10.148403    6112 kic.go:356] could not find the container false-20220516225309-2444 to remove it. will try anyways
	I0516 23:02:10.159574    6112 cli_runner.go:164] Run: docker container inspect false-20220516225309-2444 --format={{.State.Status}}
	W0516 23:02:11.234239    6112 cli_runner.go:211] docker container inspect false-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:02:11.234239    6112 cli_runner.go:217] Completed: docker container inspect false-20220516225309-2444 --format={{.State.Status}}: (1.0746559s)
	W0516 23:02:11.234239    6112 oci.go:84] error getting container status, will try to delete anyways: unknown state "false-20220516225309-2444": docker container inspect false-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220516225309-2444
	I0516 23:02:11.242243    6112 cli_runner.go:164] Run: docker exec --privileged -t false-20220516225309-2444 /bin/bash -c "sudo init 0"
	W0516 23:02:12.326787    6112 cli_runner.go:211] docker exec --privileged -t false-20220516225309-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 23:02:12.326787    6112 cli_runner.go:217] Completed: docker exec --privileged -t false-20220516225309-2444 /bin/bash -c "sudo init 0": (1.084534s)
	I0516 23:02:12.326787    6112 oci.go:641] error shutdown false-20220516225309-2444: docker exec --privileged -t false-20220516225309-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: false-20220516225309-2444
	I0516 23:02:13.345755    6112 cli_runner.go:164] Run: docker container inspect false-20220516225309-2444 --format={{.State.Status}}
	W0516 23:02:14.423502    6112 cli_runner.go:211] docker container inspect false-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:02:14.423502    6112 cli_runner.go:217] Completed: docker container inspect false-20220516225309-2444 --format={{.State.Status}}: (1.0769887s)
	I0516 23:02:14.423502    6112 oci.go:653] temporary error verifying shutdown: unknown state "false-20220516225309-2444": docker container inspect false-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220516225309-2444
	I0516 23:02:14.423502    6112 oci.go:655] temporary error: container false-20220516225309-2444 status is  but expect it to be exited
	I0516 23:02:14.423502    6112 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "false-20220516225309-2444": docker container inspect false-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220516225309-2444
	I0516 23:02:14.894509    6112 cli_runner.go:164] Run: docker container inspect false-20220516225309-2444 --format={{.State.Status}}
	W0516 23:02:15.989967    6112 cli_runner.go:211] docker container inspect false-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:02:15.990091    6112 cli_runner.go:217] Completed: docker container inspect false-20220516225309-2444 --format={{.State.Status}}: (1.0952652s)
	I0516 23:02:15.990091    6112 oci.go:653] temporary error verifying shutdown: unknown state "false-20220516225309-2444": docker container inspect false-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220516225309-2444
	I0516 23:02:15.990091    6112 oci.go:655] temporary error: container false-20220516225309-2444 status is  but expect it to be exited
	I0516 23:02:15.990091    6112 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "false-20220516225309-2444": docker container inspect false-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220516225309-2444
	I0516 23:02:16.904544    6112 cli_runner.go:164] Run: docker container inspect false-20220516225309-2444 --format={{.State.Status}}
	W0516 23:02:18.061992    6112 cli_runner.go:211] docker container inspect false-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:02:18.061992    6112 cli_runner.go:217] Completed: docker container inspect false-20220516225309-2444 --format={{.State.Status}}: (1.1574382s)
	I0516 23:02:18.061992    6112 oci.go:653] temporary error verifying shutdown: unknown state "false-20220516225309-2444": docker container inspect false-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220516225309-2444
	I0516 23:02:18.061992    6112 oci.go:655] temporary error: container false-20220516225309-2444 status is  but expect it to be exited
	I0516 23:02:18.061992    6112 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "false-20220516225309-2444": docker container inspect false-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220516225309-2444
	I0516 23:02:18.715279    6112 cli_runner.go:164] Run: docker container inspect false-20220516225309-2444 --format={{.State.Status}}
	W0516 23:02:19.777998    6112 cli_runner.go:211] docker container inspect false-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:02:19.777998    6112 cli_runner.go:217] Completed: docker container inspect false-20220516225309-2444 --format={{.State.Status}}: (1.0627104s)
	I0516 23:02:19.777998    6112 oci.go:653] temporary error verifying shutdown: unknown state "false-20220516225309-2444": docker container inspect false-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220516225309-2444
	I0516 23:02:19.777998    6112 oci.go:655] temporary error: container false-20220516225309-2444 status is  but expect it to be exited
	I0516 23:02:19.777998    6112 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "false-20220516225309-2444": docker container inspect false-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220516225309-2444
	I0516 23:02:20.903462    6112 cli_runner.go:164] Run: docker container inspect false-20220516225309-2444 --format={{.State.Status}}
	W0516 23:02:21.979517    6112 cli_runner.go:211] docker container inspect false-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:02:21.979517    6112 cli_runner.go:217] Completed: docker container inspect false-20220516225309-2444 --format={{.State.Status}}: (1.0760463s)
	I0516 23:02:21.979517    6112 oci.go:653] temporary error verifying shutdown: unknown state "false-20220516225309-2444": docker container inspect false-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220516225309-2444
	I0516 23:02:21.979517    6112 oci.go:655] temporary error: container false-20220516225309-2444 status is  but expect it to be exited
	I0516 23:02:21.979517    6112 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "false-20220516225309-2444": docker container inspect false-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220516225309-2444
	I0516 23:02:23.511278    6112 cli_runner.go:164] Run: docker container inspect false-20220516225309-2444 --format={{.State.Status}}
	W0516 23:02:24.618439    6112 cli_runner.go:211] docker container inspect false-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:02:24.618439    6112 cli_runner.go:217] Completed: docker container inspect false-20220516225309-2444 --format={{.State.Status}}: (1.1071509s)
	I0516 23:02:24.618439    6112 oci.go:653] temporary error verifying shutdown: unknown state "false-20220516225309-2444": docker container inspect false-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220516225309-2444
	I0516 23:02:24.618439    6112 oci.go:655] temporary error: container false-20220516225309-2444 status is  but expect it to be exited
	I0516 23:02:24.618439    6112 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "false-20220516225309-2444": docker container inspect false-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220516225309-2444
	I0516 23:02:27.668648    6112 cli_runner.go:164] Run: docker container inspect false-20220516225309-2444 --format={{.State.Status}}
	W0516 23:02:28.755107    6112 cli_runner.go:211] docker container inspect false-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:02:28.755155    6112 cli_runner.go:217] Completed: docker container inspect false-20220516225309-2444 --format={{.State.Status}}: (1.0863949s)
	I0516 23:02:28.755411    6112 oci.go:653] temporary error verifying shutdown: unknown state "false-20220516225309-2444": docker container inspect false-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220516225309-2444
	I0516 23:02:28.755411    6112 oci.go:655] temporary error: container false-20220516225309-2444 status is  but expect it to be exited
	I0516 23:02:28.755411    6112 oci.go:88] couldn't shut down false-20220516225309-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "false-20220516225309-2444": docker container inspect false-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220516225309-2444
	 
	I0516 23:02:28.764368    6112 cli_runner.go:164] Run: docker rm -f -v false-20220516225309-2444
	I0516 23:02:29.835182    6112 cli_runner.go:217] Completed: docker rm -f -v false-20220516225309-2444: (1.0707361s)
	I0516 23:02:29.842171    6112 cli_runner.go:164] Run: docker container inspect -f {{.Id}} false-20220516225309-2444
	W0516 23:02:30.923238    6112 cli_runner.go:211] docker container inspect -f {{.Id}} false-20220516225309-2444 returned with exit code 1
	I0516 23:02:30.923238    6112 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} false-20220516225309-2444: (1.0809167s)
	I0516 23:02:30.931762    6112 cli_runner.go:164] Run: docker network inspect false-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:02:32.012759    6112 cli_runner.go:211] docker network inspect false-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:02:32.012759    6112 cli_runner.go:217] Completed: docker network inspect false-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.080988s)
	I0516 23:02:32.019752    6112 network_create.go:272] running [docker network inspect false-20220516225309-2444] to gather additional debugging logs...
	I0516 23:02:32.019752    6112 cli_runner.go:164] Run: docker network inspect false-20220516225309-2444
	W0516 23:02:33.148296    6112 cli_runner.go:211] docker network inspect false-20220516225309-2444 returned with exit code 1
	I0516 23:02:33.148296    6112 cli_runner.go:217] Completed: docker network inspect false-20220516225309-2444: (1.1285347s)
	I0516 23:02:33.148296    6112 network_create.go:275] error running [docker network inspect false-20220516225309-2444]: docker network inspect false-20220516225309-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: false-20220516225309-2444
	I0516 23:02:33.148296    6112 network_create.go:277] output of [docker network inspect false-20220516225309-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: false-20220516225309-2444
	
	** /stderr **
	W0516 23:02:33.149303    6112 delete.go:139] delete failed (probably ok) <nil>
	I0516 23:02:33.149303    6112 fix.go:115] Sleeping 1 second for extra luck!
	I0516 23:02:34.150027    6112 start.go:131] createHost starting for "" (driver="docker")
	I0516 23:02:34.155383    6112 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0516 23:02:34.155383    6112 start.go:165] libmachine.API.Create for "false-20220516225309-2444" (driver="docker")
	I0516 23:02:34.155383    6112 client.go:168] LocalClient.Create starting
	I0516 23:02:34.156161    6112 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 23:02:34.156708    6112 main.go:134] libmachine: Decoding PEM data...
	I0516 23:02:34.156780    6112 main.go:134] libmachine: Parsing certificate...
	I0516 23:02:34.156919    6112 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 23:02:34.156919    6112 main.go:134] libmachine: Decoding PEM data...
	I0516 23:02:34.156919    6112 main.go:134] libmachine: Parsing certificate...
	I0516 23:02:34.165047    6112 cli_runner.go:164] Run: docker network inspect false-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:02:35.280364    6112 cli_runner.go:211] docker network inspect false-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:02:35.280364    6112 cli_runner.go:217] Completed: docker network inspect false-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1153073s)
	I0516 23:02:35.288362    6112 network_create.go:272] running [docker network inspect false-20220516225309-2444] to gather additional debugging logs...
	I0516 23:02:35.288362    6112 cli_runner.go:164] Run: docker network inspect false-20220516225309-2444
	W0516 23:02:36.423702    6112 cli_runner.go:211] docker network inspect false-20220516225309-2444 returned with exit code 1
	I0516 23:02:36.423783    6112 cli_runner.go:217] Completed: docker network inspect false-20220516225309-2444: (1.1353309s)
	I0516 23:02:36.423867    6112 network_create.go:275] error running [docker network inspect false-20220516225309-2444]: docker network inspect false-20220516225309-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: false-20220516225309-2444
	I0516 23:02:36.423867    6112 network_create.go:277] output of [docker network inspect false-20220516225309-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: false-20220516225309-2444
	
	** /stderr **
	I0516 23:02:36.433641    6112 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 23:02:37.544247    6112 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1105504s)
	I0516 23:02:37.561817    6112 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00010a378] amended:true}} dirty:map[192.168.49.0:0xc00010a378 192.168.58.0:0xc0005b77f0 192.168.67.0:0xc00065a178 192.168.76.0:0xc0005b7888] misses:2}
	I0516 23:02:37.561817    6112 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:02:37.578533    6112 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00010a378] amended:true}} dirty:map[192.168.49.0:0xc00010a378 192.168.58.0:0xc0005b77f0 192.168.67.0:0xc00065a178 192.168.76.0:0xc0005b7888] misses:3}
	I0516 23:02:37.578610    6112 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:02:37.593016    6112 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00010a378 192.168.58.0:0xc0005b77f0 192.168.67.0:0xc00065a178 192.168.76.0:0xc0005b7888] amended:false}} dirty:map[] misses:0}
	I0516 23:02:37.593016    6112 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:02:37.609320    6112 network.go:279] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00010a378 192.168.58.0:0xc0005b77f0 192.168.67.0:0xc00065a178 192.168.76.0:0xc0005b7888] amended:false}} dirty:map[] misses:0}
	I0516 23:02:37.609320    6112 network.go:238] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:02:37.626893    6112 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00010a378 192.168.58.0:0xc0005b77f0 192.168.67.0:0xc00065a178 192.168.76.0:0xc0005b7888] amended:true}} dirty:map[192.168.49.0:0xc00010a378 192.168.58.0:0xc0005b77f0 192.168.67.0:0xc00065a178 192.168.76.0:0xc0005b7888 192.168.85.0:0xc00010a588] misses:0}
	I0516 23:02:37.626893    6112 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:02:37.626893    6112 network_create.go:115] attempt to create docker network false-20220516225309-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0516 23:02:37.635127    6112 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220516225309-2444
	W0516 23:02:38.796540    6112 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220516225309-2444 returned with exit code 1
	I0516 23:02:38.796540    6112 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220516225309-2444: (1.1614042s)
	E0516 23:02:38.796540    6112 network_create.go:104] error while trying to create docker network false-20220516225309-2444 192.168.85.0/24: create docker network false-20220516225309-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220516225309-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 778e2ee7275a7cf2a6fd270d02d2815d8328a6702d046de76683019360ab2847 (br-778e2ee7275a): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	W0516 23:02:38.796540    6112 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network false-20220516225309-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220516225309-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 778e2ee7275a7cf2a6fd270d02d2815d8328a6702d046de76683019360ab2847 (br-778e2ee7275a): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network false-20220516225309-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220516225309-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 778e2ee7275a7cf2a6fd270d02d2815d8328a6702d046de76683019360ab2847 (br-778e2ee7275a): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	I0516 23:02:38.815197    6112 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 23:02:39.863670    6112 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.048465s)
	I0516 23:02:39.875478    6112 cli_runner.go:164] Run: docker volume create false-20220516225309-2444 --label name.minikube.sigs.k8s.io=false-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 23:02:40.953658    6112 cli_runner.go:211] docker volume create false-20220516225309-2444 --label name.minikube.sigs.k8s.io=false-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 23:02:40.953658    6112 cli_runner.go:217] Completed: docker volume create false-20220516225309-2444 --label name.minikube.sigs.k8s.io=false-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0781726s)
	I0516 23:02:40.953658    6112 client.go:171] LocalClient.Create took 6.7982207s
	I0516 23:02:42.967895    6112 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:02:42.974322    6112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444
	W0516 23:02:44.020630    6112 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444 returned with exit code 1
	I0516 23:02:44.020676    6112 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444: (1.0462637s)
	I0516 23:02:44.020753    6112 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220516225309-2444
	I0516 23:02:44.363225    6112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444
	W0516 23:02:45.459542    6112 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444 returned with exit code 1
	I0516 23:02:45.459542    6112 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444: (1.0963095s)
	W0516 23:02:45.459542    6112 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220516225309-2444
	
	W0516 23:02:45.459542    6112 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220516225309-2444
	I0516 23:02:45.473379    6112 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:02:45.480816    6112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444
	W0516 23:02:46.568822    6112 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444 returned with exit code 1
	I0516 23:02:46.568822    6112 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444: (1.087999s)
	I0516 23:02:46.568822    6112 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220516225309-2444
	I0516 23:02:46.798354    6112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444
	W0516 23:02:47.894365    6112 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444 returned with exit code 1
	I0516 23:02:47.894365    6112 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444: (1.0960041s)
	W0516 23:02:47.894365    6112 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220516225309-2444
	
	W0516 23:02:47.894365    6112 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220516225309-2444
	I0516 23:02:47.894365    6112 start.go:134] duration metric: createHost completed in 13.7439784s
	I0516 23:02:47.907374    6112 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:02:47.918374    6112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444
	W0516 23:02:48.946855    6112 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444 returned with exit code 1
	I0516 23:02:48.946855    6112 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444: (1.0284284s)
	I0516 23:02:48.946855    6112 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220516225309-2444
	I0516 23:02:49.210004    6112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444
	W0516 23:02:50.307391    6112 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444 returned with exit code 1
	I0516 23:02:50.307391    6112 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444: (1.0973796s)
	W0516 23:02:50.307391    6112 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220516225309-2444
	
	W0516 23:02:50.307391    6112 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220516225309-2444
	I0516 23:02:50.318389    6112 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:02:50.326380    6112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444
	W0516 23:02:51.427864    6112 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444 returned with exit code 1
	I0516 23:02:51.427864    6112 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444: (1.1014764s)
	I0516 23:02:51.427864    6112 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220516225309-2444
	I0516 23:02:51.641482    6112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444
	W0516 23:02:52.750059    6112 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444 returned with exit code 1
	I0516 23:02:52.750059    6112 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444: (1.1085698s)
	W0516 23:02:52.750059    6112 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220516225309-2444
	
	W0516 23:02:52.750059    6112 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220516225309-2444
	I0516 23:02:52.750059    6112 fix.go:57] fixHost completed within 47.1078319s
	I0516 23:02:52.750059    6112 start.go:81] releasing machines lock for "false-20220516225309-2444", held for 47.1078319s
	W0516 23:02:52.750059    6112 out.go:239] * Failed to start docker container. Running "minikube delete -p false-20220516225309-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for false-20220516225309-2444 container: docker volume create false-20220516225309-2444 --label name.minikube.sigs.k8s.io=false-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create false-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/false-20220516225309-2444': mkdir /var/lib/docker/volumes/false-20220516225309-2444: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p false-20220516225309-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for false-20220516225309-2444 container: docker volume create false-20220516225309-2444 --label name.minikube.sigs.k8s.io=false-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create false-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/false-20220516225309-2444': mkdir /var/lib/docker/volumes/false-20220516225309-2444: read-only file system
	
	I0516 23:02:52.757072    6112 out.go:177] 
	W0516 23:02:52.760067    6112 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for false-20220516225309-2444 container: docker volume create false-20220516225309-2444 --label name.minikube.sigs.k8s.io=false-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create false-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/false-20220516225309-2444': mkdir /var/lib/docker/volumes/false-20220516225309-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for false-20220516225309-2444 container: docker volume create false-20220516225309-2444 --label name.minikube.sigs.k8s.io=false-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create false-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/false-20220516225309-2444': mkdir /var/lib/docker/volumes/false-20220516225309-2444: read-only file system
	
	W0516 23:02:52.760067    6112 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0516 23:02:52.760067    6112 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0516 23:02:52.765066    6112 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 60
--- FAIL: TestNetworkPlugins/group/false/Start (81.18s)
TestStartStop/group/default-k8s-different-port/serial/DeployApp (8.53s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context default-k8s-different-port-20220516230045-2444 create -f testdata\busybox.yaml
start_stop_delete_test.go:198: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220516230045-2444 create -f testdata\busybox.yaml: exit status 1 (267.9422ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-different-port-20220516230045-2444" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:198: kubectl --context default-k8s-different-port-20220516230045-2444 create -f testdata\busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220516230045-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220516230045-2444: exit status 1 (1.1373462s)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220516230045-2444

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220516230045-2444 -n default-k8s-different-port-20220516230045-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220516230045-2444 -n default-k8s-different-port-20220516230045-2444: exit status 7 (2.8919569s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0516 23:02:15.879537    9128 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220516230045-2444" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220516230045-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220516230045-2444: exit status 1 (1.2001253s)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220516230045-2444

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220516230045-2444 -n default-k8s-different-port-20220516230045-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220516230045-2444 -n default-k8s-different-port-20220516230045-2444: exit status 7 (3.0005033s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0516 23:02:20.076165    6964 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220516230045-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/DeployApp (8.53s)
TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (7.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-different-port-20220516230045-2444 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-different-port-20220516230045-2444 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.001378s)
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context default-k8s-different-port-20220516230045-2444 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:217: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220516230045-2444 describe deploy/metrics-server -n kube-system: exit status 1 (233.5147ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-different-port-20220516230045-2444" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:219: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-different-port-20220516230045-2444 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:223: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220516230045-2444

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220516230045-2444: exit status 1 (1.1372242s)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220516230045-2444

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220516230045-2444 -n default-k8s-different-port-20220516230045-2444

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220516230045-2444 -n default-k8s-different-port-20220516230045-2444: exit status 7 (2.993064s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0516 23:02:27.472989    4544 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220516230045-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (7.38s)
TestStartStop/group/default-k8s-different-port/serial/Stop (27.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-different-port-20220516230045-2444 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:230: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p default-k8s-different-port-20220516230045-2444 --alsologtostderr -v=3: exit status 82 (22.8662964s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-different-port-20220516230045-2444"  ...
	* Stopping node "default-k8s-different-port-20220516230045-2444"  ...
	* Stopping node "default-k8s-different-port-20220516230045-2444"  ...
	* Stopping node "default-k8s-different-port-20220516230045-2444"  ...
	* Stopping node "default-k8s-different-port-20220516230045-2444"  ...
	* Stopping node "default-k8s-different-port-20220516230045-2444"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0516 23:02:27.731202    7732 out.go:296] Setting OutFile to fd 1724 ...
	I0516 23:02:27.794206    7732 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:02:27.794206    7732 out.go:309] Setting ErrFile to fd 1908...
	I0516 23:02:27.794827    7732 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:02:27.806111    7732 out.go:303] Setting JSON to false
	I0516 23:02:27.806738    7732 daemonize_windows.go:44] trying to kill existing schedule stop for profile default-k8s-different-port-20220516230045-2444...
	I0516 23:02:27.818105    7732 ssh_runner.go:195] Run: systemctl --version
	I0516 23:02:27.825776    7732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444
	W0516 23:02:30.403968    7732 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:02:30.403968    7732 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: (2.5781698s)
	I0516 23:02:30.415751    7732 ssh_runner.go:195] Run: sudo service minikube-scheduled-stop stop
	I0516 23:02:30.424147    7732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444
	W0516 23:02:31.554563    7732 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:02:31.554563    7732 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: (1.1304062s)
	I0516 23:02:31.554563    7732 retry.go:31] will retry after 360.127272ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:02:31.925289    7732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444
	W0516 23:02:33.020789    7732 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:02:33.020789    7732 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: (1.0954908s)
	I0516 23:02:33.020789    7732 openrc.go:165] stop output: 
	E0516 23:02:33.020789    7732 daemonize_windows.go:38] error terminating scheduled stop for profile default-k8s-different-port-20220516230045-2444: stopping schedule-stop service for profile default-k8s-different-port-20220516230045-2444: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:02:33.020789    7732 mustload.go:65] Loading cluster: default-k8s-different-port-20220516230045-2444
	I0516 23:02:33.021764    7732 config.go:178] Loaded profile config "default-k8s-different-port-20220516230045-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:02:33.021764    7732 stop.go:39] StopHost: default-k8s-different-port-20220516230045-2444
	I0516 23:02:33.024775    7732 out.go:177] * Stopping node "default-k8s-different-port-20220516230045-2444"  ...
	I0516 23:02:33.044764    7732 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:02:34.118262    7732 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:02:34.118262    7732 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.0734655s)
	W0516 23:02:34.118262    7732 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	W0516 23:02:34.118262    7732 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:02:34.118262    7732 retry.go:31] will retry after 937.714187ms: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:02:35.063122    7732 stop.go:39] StopHost: default-k8s-different-port-20220516230045-2444
	I0516 23:02:35.071173    7732 out.go:177] * Stopping node "default-k8s-different-port-20220516230045-2444"  ...
	I0516 23:02:35.093941    7732 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:02:36.218918    7732 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:02:36.218918    7732 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.1249671s)
	W0516 23:02:36.218918    7732 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	W0516 23:02:36.218918    7732 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:02:36.218918    7732 retry.go:31] will retry after 1.386956246s: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:02:37.621698    7732 stop.go:39] StopHost: default-k8s-different-port-20220516230045-2444
	I0516 23:02:37.628039    7732 out.go:177] * Stopping node "default-k8s-different-port-20220516230045-2444"  ...
	I0516 23:02:37.645375    7732 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:02:38.781325    7732 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:02:38.781325    7732 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.1359412s)
	W0516 23:02:38.781325    7732 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	W0516 23:02:38.781325    7732 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:02:38.781325    7732 retry.go:31] will retry after 2.670351914s: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:02:41.461098    7732 stop.go:39] StopHost: default-k8s-different-port-20220516230045-2444
	I0516 23:02:41.467096    7732 out.go:177] * Stopping node "default-k8s-different-port-20220516230045-2444"  ...
	I0516 23:02:41.483108    7732 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:02:42.580166    7732 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:02:42.580217    7732 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.0968012s)
	W0516 23:02:42.580279    7732 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	W0516 23:02:42.580332    7732 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:02:42.580332    7732 retry.go:31] will retry after 1.909024939s: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:02:44.495524    7732 stop.go:39] StopHost: default-k8s-different-port-20220516230045-2444
	I0516 23:02:44.499703    7732 out.go:177] * Stopping node "default-k8s-different-port-20220516230045-2444"  ...
	I0516 23:02:44.516759    7732 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:02:45.614006    7732 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:02:45.614076    7732 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.0970577s)
	W0516 23:02:45.614184    7732 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	W0516 23:02:45.614264    7732 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:02:45.614264    7732 retry.go:31] will retry after 3.323628727s: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:02:48.946855    7732 stop.go:39] StopHost: default-k8s-different-port-20220516230045-2444
	I0516 23:02:48.951552    7732 out.go:177] * Stopping node "default-k8s-different-port-20220516230045-2444"  ...
	I0516 23:02:48.970482    7732 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:02:50.040640    7732 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:02:50.040668    7732 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.0699413s)
	W0516 23:02:50.040668    7732 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	W0516 23:02:50.040668    7732 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:02:50.046370    7732 out.go:177] 
	W0516 23:02:50.048950    7732 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect default-k8s-different-port-20220516230045-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect default-k8s-different-port-20220516230045-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	
	W0516 23:02:50.048950    7732 out.go:239] * 
	* 
	W0516 23:02:50.316391    7732 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_39.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_39.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0516 23:02:50.320380    7732 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:232: failed stopping minikube - first stop-. args "out/minikube-windows-amd64.exe stop -p default-k8s-different-port-20220516230045-2444 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220516230045-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220516230045-2444: exit status 1 (1.1253475s)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220516230045-2444

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220516230045-2444 -n default-k8s-different-port-20220516230045-2444

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220516230045-2444 -n default-k8s-different-port-20220516230045-2444: exit status 7 (3.1156708s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0516 23:02:54.576890    8988 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220516230045-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Stop (27.12s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (27.06s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-20220516230100-2444 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:230: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p newest-cni-20220516230100-2444 --alsologtostderr -v=3: exit status 82 (22.8685817s)

                                                
                                                
-- stdout --
	* Stopping node "newest-cni-20220516230100-2444"  ...
	* Stopping node "newest-cni-20220516230100-2444"  ...
	* Stopping node "newest-cni-20220516230100-2444"  ...
	* Stopping node "newest-cni-20220516230100-2444"  ...
	* Stopping node "newest-cni-20220516230100-2444"  ...
	* Stopping node "newest-cni-20220516230100-2444"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0516 23:02:30.025005    2892 out.go:296] Setting OutFile to fd 1896 ...
	I0516 23:02:30.097449    2892 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:02:30.097449    2892 out.go:309] Setting ErrFile to fd 1844...
	I0516 23:02:30.097449    2892 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:02:30.108884    2892 out.go:303] Setting JSON to false
	I0516 23:02:30.108938    2892 daemonize_windows.go:44] trying to kill existing schedule stop for profile newest-cni-20220516230100-2444...
	I0516 23:02:30.120189    2892 ssh_runner.go:195] Run: systemctl --version
	I0516 23:02:30.128209    2892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444
	W0516 23:02:32.755487    2892 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:02:32.755517    2892 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: (2.627214s)
	I0516 23:02:32.767026    2892 ssh_runner.go:195] Run: sudo service minikube-scheduled-stop stop
	I0516 23:02:32.783722    2892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444
	W0516 23:02:33.910452    2892 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:02:33.910452    2892 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: (1.1267206s)
	I0516 23:02:33.910452    2892 retry.go:31] will retry after 360.127272ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:02:34.285266    2892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444
	W0516 23:02:35.374144    2892 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:02:35.374144    2892 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: (1.0887065s)
	I0516 23:02:35.374144    2892 openrc.go:165] stop output: 
	E0516 23:02:35.374144    2892 daemonize_windows.go:38] error terminating scheduled stop for profile newest-cni-20220516230100-2444: stopping schedule-stop service for profile newest-cni-20220516230100-2444: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:02:35.374144    2892 mustload.go:65] Loading cluster: newest-cni-20220516230100-2444
	I0516 23:02:35.374883    2892 config.go:178] Loaded profile config "newest-cni-20220516230100-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:02:35.375511    2892 stop.go:39] StopHost: newest-cni-20220516230100-2444
	I0516 23:02:35.380341    2892 out.go:177] * Stopping node "newest-cni-20220516230100-2444"  ...
	I0516 23:02:35.401355    2892 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:02:36.501957    2892 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:02:36.501957    2892 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.1005919s)
	W0516 23:02:36.501957    2892 stop.go:75] unable to get state: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	W0516 23:02:36.501957    2892 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:02:36.501957    2892 retry.go:31] will retry after 937.714187ms: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:02:37.451933    2892 stop.go:39] StopHost: newest-cni-20220516230100-2444
	I0516 23:02:37.457851    2892 out.go:177] * Stopping node "newest-cni-20220516230100-2444"  ...
	I0516 23:02:37.486536    2892 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:02:38.592930    2892 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:02:38.592930    2892 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.1063852s)
	W0516 23:02:38.592930    2892 stop.go:75] unable to get state: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	W0516 23:02:38.592930    2892 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:02:38.592930    2892 retry.go:31] will retry after 1.386956246s: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:02:39.988617    2892 stop.go:39] StopHost: newest-cni-20220516230100-2444
	I0516 23:02:39.992017    2892 out.go:177] * Stopping node "newest-cni-20220516230100-2444"  ...
	I0516 23:02:40.009777    2892 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:02:41.079113    2892 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:02:41.079201    2892 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.0691327s)
	W0516 23:02:41.079201    2892 stop.go:75] unable to get state: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	W0516 23:02:41.079201    2892 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:02:41.079201    2892 retry.go:31] will retry after 2.670351914s: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:02:43.755002    2892 stop.go:39] StopHost: newest-cni-20220516230100-2444
	I0516 23:02:43.759266    2892 out.go:177] * Stopping node "newest-cni-20220516230100-2444"  ...
	I0516 23:02:43.779429    2892 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:02:44.904607    2892 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:02:44.904607    2892 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.125139s)
	W0516 23:02:44.904607    2892 stop.go:75] unable to get state: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	W0516 23:02:44.904607    2892 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:02:44.904607    2892 retry.go:31] will retry after 1.909024939s: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:02:46.820615    2892 stop.go:39] StopHost: newest-cni-20220516230100-2444
	I0516 23:02:46.831613    2892 out.go:177] * Stopping node "newest-cni-20220516230100-2444"  ...
	I0516 23:02:46.851689    2892 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:02:47.910373    2892 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:02:47.910373    2892 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.0586764s)
	W0516 23:02:47.910373    2892 stop.go:75] unable to get state: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	W0516 23:02:47.910373    2892 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:02:47.910373    2892 retry.go:31] will retry after 3.323628727s: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:02:51.242735    2892 stop.go:39] StopHost: newest-cni-20220516230100-2444
	I0516 23:02:51.247986    2892 out.go:177] * Stopping node "newest-cni-20220516230100-2444"  ...
	I0516 23:02:51.268069    2892 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:02:52.373179    2892 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:02:52.373273    2892 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.1049882s)
	W0516 23:02:52.373273    2892 stop.go:75] unable to get state: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	W0516 23:02:52.373273    2892 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:02:52.376483    2892 out.go:177] 
	W0516 23:02:52.378783    2892 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect newest-cni-20220516230100-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect newest-cni-20220516230100-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	
	W0516 23:02:52.378855    2892 out.go:239] * 
	* 
	W0516 23:02:52.617236    2892 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_39.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_39.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0516 23:02:52.620320    2892 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:232: failed stopping minikube - first stop-. args "out/minikube-windows-amd64.exe stop -p newest-cni-20220516230100-2444 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220516230100-2444

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Stop
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220516230100-2444: exit status 1 (1.2021863s)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: newest-cni-20220516230100-2444

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220516230100-2444 -n newest-cni-20220516230100-2444

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Stop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220516230100-2444 -n newest-cni-20220516230100-2444: exit status 7 (2.9751601s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0516 23:02:56.814211    8008 status.go:247] status error: host: state: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20220516230100-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/Stop (27.06s)

                                                
                                    
TestNetworkPlugins/group/cilium/Start (81.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p cilium-20220516225309-2444 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cilium-20220516225309-2444 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker: exit status 60 (1m21.6726761s)

                                                
                                                
-- stdout --
	* [cilium-20220516225309-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node cilium-20220516225309-2444 in cluster cilium-20220516225309-2444
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cilium-20220516225309-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0516 23:02:46.468913    5264 out.go:296] Setting OutFile to fd 1664 ...
	I0516 23:02:46.533915    5264 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:02:46.533915    5264 out.go:309] Setting ErrFile to fd 2036...
	I0516 23:02:46.533915    5264 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:02:46.548165    5264 out.go:303] Setting JSON to false
	I0516 23:02:46.550596    5264 start.go:115] hostinfo: {"hostname":"minikube2","uptime":5278,"bootTime":1652736888,"procs":151,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0516 23:02:46.550738    5264 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0516 23:02:46.556815    5264 out.go:177] * [cilium-20220516225309-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0516 23:02:46.560850    5264 notify.go:193] Checking for updates...
	I0516 23:02:46.563867    5264 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0516 23:02:46.565857    5264 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0516 23:02:46.570866    5264 out.go:177]   - MINIKUBE_LOCATION=12739
	I0516 23:02:46.572862    5264 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0516 23:02:46.575881    5264 config.go:178] Loaded profile config "default-k8s-different-port-20220516230045-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:02:46.576826    5264 config.go:178] Loaded profile config "false-20220516225309-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:02:46.576826    5264 config.go:178] Loaded profile config "multinode-20220516223121-2444-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:02:46.576826    5264 config.go:178] Loaded profile config "newest-cni-20220516230100-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:02:46.577864    5264 driver.go:358] Setting default libvirt URI to qemu:///system
	I0516 23:02:49.247203    5264 docker.go:137] docker version: linux-20.10.14
	I0516 23:02:49.256822    5264 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 23:02:51.335315    5264 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0784238s)
	I0516 23:02:51.335392    5264 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 23:02:50.2835026 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0516 23:02:51.341835    5264 out.go:177] * Using the docker driver based on user configuration
	I0516 23:02:51.344822    5264 start.go:284] selected driver: docker
	I0516 23:02:51.344822    5264 start.go:806] validating driver "docker" against <nil>
	I0516 23:02:51.344822    5264 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0516 23:02:51.442901    5264 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 23:02:53.598592    5264 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1556754s)
	I0516 23:02:53.599031    5264 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 23:02:52.5223145 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0516 23:02:53.599031    5264 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0516 23:02:53.599920    5264 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0516 23:02:53.606333    5264 out.go:177] * Using Docker Desktop driver with the root privilege
	I0516 23:02:53.608145    5264 cni.go:95] Creating CNI manager for "cilium"
	I0516 23:02:53.608145    5264 start_flags.go:301] Found "Cilium" CNI - setting NetworkPlugin=cni
	I0516 23:02:53.608145    5264 start_flags.go:306] config:
	{Name:cilium-20220516225309-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:cilium-20220516225309-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 23:02:53.612143    5264 out.go:177] * Starting control plane node cilium-20220516225309-2444 in cluster cilium-20220516225309-2444
	I0516 23:02:53.614150    5264 cache.go:120] Beginning downloading kic base image for docker with docker
	I0516 23:02:53.617140    5264 out.go:177] * Pulling base image ...
	I0516 23:02:53.619140    5264 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0516 23:02:53.619140    5264 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0516 23:02:53.619140    5264 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0516 23:02:53.619140    5264 cache.go:57] Caching tarball of preloaded images
	I0516 23:02:53.620098    5264 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0516 23:02:53.620098    5264 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0516 23:02:53.621094    5264 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-20220516225309-2444\config.json ...
	I0516 23:02:53.621094    5264 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-20220516225309-2444\config.json: {Name:mkfa5c5d0678c42a18ff865200376b8eab8c52bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0516 23:02:54.761616    5264 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0516 23:02:54.761616    5264 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:02:54.761616    5264 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:02:54.761616    5264 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
	I0516 23:02:54.761616    5264 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory, skipping pull
	I0516 23:02:54.761616    5264 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in cache, skipping pull
	I0516 23:02:54.761616    5264 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
	I0516 23:02:54.762625    5264 cache.go:160] Loading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from local cache
	I0516 23:02:54.762625    5264 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:02:57.146457    5264 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from cached tarball
	I0516 23:02:57.146457    5264 cache.go:206] Successfully downloaded all kic artifacts
	I0516 23:02:57.146457    5264 start.go:352] acquiring machines lock for cilium-20220516225309-2444: {Name:mk41808534544680dd00277d74c72f2cb58e2b20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 23:02:57.146457    5264 start.go:356] acquired machines lock for "cilium-20220516225309-2444" in 0s
	I0516 23:02:57.146457    5264 start.go:91] Provisioning new machine with config: &{Name:cilium-20220516225309-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:cilium-20220516225309-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0516 23:02:57.146457    5264 start.go:131] createHost starting for "" (driver="docker")
	I0516 23:02:57.153487    5264 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0516 23:02:57.153487    5264 start.go:165] libmachine.API.Create for "cilium-20220516225309-2444" (driver="docker")
	I0516 23:02:57.153487    5264 client.go:168] LocalClient.Create starting
	I0516 23:02:57.154505    5264 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 23:02:57.154505    5264 main.go:134] libmachine: Decoding PEM data...
	I0516 23:02:57.154505    5264 main.go:134] libmachine: Parsing certificate...
	I0516 23:02:57.154505    5264 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 23:02:57.154505    5264 main.go:134] libmachine: Decoding PEM data...
	I0516 23:02:57.154505    5264 main.go:134] libmachine: Parsing certificate...
	I0516 23:02:57.163207    5264 cli_runner.go:164] Run: docker network inspect cilium-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:02:58.274110    5264 cli_runner.go:211] docker network inspect cilium-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:02:58.274110    5264 cli_runner.go:217] Completed: docker network inspect cilium-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1108931s)
	I0516 23:02:58.283704    5264 network_create.go:272] running [docker network inspect cilium-20220516225309-2444] to gather additional debugging logs...
	I0516 23:02:58.283704    5264 cli_runner.go:164] Run: docker network inspect cilium-20220516225309-2444
	W0516 23:02:59.391840    5264 cli_runner.go:211] docker network inspect cilium-20220516225309-2444 returned with exit code 1
	I0516 23:02:59.391889    5264 cli_runner.go:217] Completed: docker network inspect cilium-20220516225309-2444: (1.1079039s)
	I0516 23:02:59.391954    5264 network_create.go:275] error running [docker network inspect cilium-20220516225309-2444]: docker network inspect cilium-20220516225309-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20220516225309-2444
	I0516 23:02:59.392001    5264 network_create.go:277] output of [docker network inspect cilium-20220516225309-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20220516225309-2444
	
	** /stderr **
	I0516 23:02:59.401093    5264 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 23:03:00.534134    5264 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1329651s)
	I0516 23:03:00.556430    5264 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000114ab8] misses:0}
	I0516 23:03:00.556430    5264 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:03:00.556430    5264 network_create.go:115] attempt to create docker network cilium-20220516225309-2444 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0516 23:03:00.563475    5264 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220516225309-2444
	W0516 23:03:01.686056    5264 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220516225309-2444 returned with exit code 1
	I0516 23:03:01.686056    5264 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220516225309-2444: (1.1225718s)
	W0516 23:03:01.686056    5264 network_create.go:107] failed to create docker network cilium-20220516225309-2444 192.168.49.0/24, will retry: subnet is taken
	I0516 23:03:01.706677    5264 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000114ab8] amended:false}} dirty:map[] misses:0}
	I0516 23:03:01.706677    5264 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:03:01.725594    5264 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000114ab8] amended:true}} dirty:map[192.168.49.0:0xc000114ab8 192.168.58.0:0xc0000063e8] misses:0}
	I0516 23:03:01.725737    5264 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:03:01.725790    5264 network_create.go:115] attempt to create docker network cilium-20220516225309-2444 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0516 23:03:01.739337    5264 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220516225309-2444
	W0516 23:03:02.871027    5264 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220516225309-2444 returned with exit code 1
	I0516 23:03:02.871027    5264 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220516225309-2444: (1.1316797s)
	W0516 23:03:02.871027    5264 network_create.go:107] failed to create docker network cilium-20220516225309-2444 192.168.58.0/24, will retry: subnet is taken
	I0516 23:03:02.892028    5264 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000114ab8] amended:true}} dirty:map[192.168.49.0:0xc000114ab8 192.168.58.0:0xc0000063e8] misses:1}
	I0516 23:03:02.892028    5264 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:03:02.912047    5264 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000114ab8] amended:true}} dirty:map[192.168.49.0:0xc000114ab8 192.168.58.0:0xc0000063e8 192.168.67.0:0xc000646558] misses:1}
	I0516 23:03:02.913029    5264 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:03:02.913029    5264 network_create.go:115] attempt to create docker network cilium-20220516225309-2444 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0516 23:03:02.922349    5264 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220516225309-2444
	W0516 23:03:04.083931    5264 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220516225309-2444 returned with exit code 1
	I0516 23:03:04.084131    5264 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220516225309-2444: (1.1614054s)
	W0516 23:03:04.084217    5264 network_create.go:107] failed to create docker network cilium-20220516225309-2444 192.168.67.0/24, will retry: subnet is taken
	I0516 23:03:04.106314    5264 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000114ab8] amended:true}} dirty:map[192.168.49.0:0xc000114ab8 192.168.58.0:0xc0000063e8 192.168.67.0:0xc000646558] misses:2}
	I0516 23:03:04.106314    5264 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:03:04.125316    5264 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000114ab8] amended:true}} dirty:map[192.168.49.0:0xc000114ab8 192.168.58.0:0xc0000063e8 192.168.67.0:0xc000646558 192.168.76.0:0xc000114bf0] misses:2}
	I0516 23:03:04.125316    5264 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:03:04.125316    5264 network_create.go:115] attempt to create docker network cilium-20220516225309-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0516 23:03:04.132316    5264 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220516225309-2444
	W0516 23:03:05.267126    5264 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220516225309-2444 returned with exit code 1
	I0516 23:03:05.267126    5264 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220516225309-2444: (1.1347998s)
	E0516 23:03:05.267126    5264 network_create.go:104] error while trying to create docker network cilium-20220516225309-2444 192.168.76.0/24: create docker network cilium-20220516225309-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220516225309-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network dd7ae88c47384fa3c5b8aa4b7736b50eddb6fbb25892becbd870e83e002b31d9 (br-dd7ae88c4738): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
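	The daemon rejects the `docker network create` above because 192.168.76.0/24 collides with an existing bridge network. The overlap check itself is simple: two CIDR blocks share addresses exactly when either network address lies inside the other. A minimal illustrative sketch (this is not minikube's or Docker's actual code; `cidrsOverlap` is a hypothetical helper):

```go
package main

import (
	"fmt"
	"net"
)

// cidrsOverlap reports whether two IPv4 CIDR blocks share any addresses --
// the condition Docker rejects with "networks have overlapping IPv4".
func cidrsOverlap(a, b string) bool {
	_, na, err := net.ParseCIDR(a)
	if err != nil {
		return false
	}
	_, nb, err := net.ParseCIDR(b)
	if err != nil {
		return false
	}
	// Two subnets overlap iff either network address is contained in the other.
	return na.Contains(nb.IP) || nb.Contains(na.IP)
}

func main() {
	// 192.168.76.0/24 is the subnet minikube tried to reserve above.
	fmt.Println(cidrsOverlap("192.168.76.0/24", "192.168.76.0/24")) // true
	fmt.Println(cidrsOverlap("192.168.76.0/24", "192.168.49.0/24")) // false
	fmt.Println(cidrsOverlap("192.168.0.0/16", "192.168.76.0/24"))  // true
}
```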
	W0516 23:03:05.267126    5264 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cilium-20220516225309-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220516225309-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network dd7ae88c47384fa3c5b8aa4b7736b50eddb6fbb25892becbd870e83e002b31d9 (br-dd7ae88c4738): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	I0516 23:03:05.285098    5264 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 23:03:06.340123    5264 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0549602s)
	I0516 23:03:06.349721    5264 cli_runner.go:164] Run: docker volume create cilium-20220516225309-2444 --label name.minikube.sigs.k8s.io=cilium-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 23:03:07.456857    5264 cli_runner.go:211] docker volume create cilium-20220516225309-2444 --label name.minikube.sigs.k8s.io=cilium-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 23:03:07.456912    5264 cli_runner.go:217] Completed: docker volume create cilium-20220516225309-2444 --label name.minikube.sigs.k8s.io=cilium-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: (1.107063s)
	I0516 23:03:07.456970    5264 client.go:171] LocalClient.Create took 10.3033935s
	I0516 23:03:09.481696    5264 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:03:09.488722    5264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444
	W0516 23:03:10.605434    5264 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444 returned with exit code 1
	I0516 23:03:10.605460    5264 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444: (1.1166562s)
	I0516 23:03:10.605460    5264 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	I0516 23:03:10.901120    5264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444
	W0516 23:03:11.985713    5264 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444 returned with exit code 1
	I0516 23:03:11.985749    5264 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444: (1.0844711s)
	W0516 23:03:11.985876    5264 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	
	W0516 23:03:11.985876    5264 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	I0516 23:03:11.996959    5264 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:03:12.004665    5264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444
	W0516 23:03:13.124966    5264 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444 returned with exit code 1
	I0516 23:03:13.125021    5264 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444: (1.1202461s)
	I0516 23:03:13.125257    5264 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	I0516 23:03:13.432133    5264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444
	W0516 23:03:14.616215    5264 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444 returned with exit code 1
	I0516 23:03:14.616215    5264 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444: (1.1840032s)
	W0516 23:03:14.616215    5264 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	
	W0516 23:03:14.616215    5264 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	I0516 23:03:14.616215    5264 start.go:134] duration metric: createHost completed in 17.4696071s
	I0516 23:03:14.616215    5264 start.go:81] releasing machines lock for "cilium-20220516225309-2444", held for 17.4696071s
	W0516 23:03:14.616215    5264 start.go:608] error starting host: creating host: create: creating: setting up container node: creating volume for cilium-20220516225309-2444 container: docker volume create cilium-20220516225309-2444 --label name.minikube.sigs.k8s.io=cilium-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cilium-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/cilium-20220516225309-2444': mkdir /var/lib/docker/volumes/cilium-20220516225309-2444: read-only file system
	I0516 23:03:14.633262    5264 cli_runner.go:164] Run: docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}
	W0516 23:03:15.727022    5264 cli_runner.go:211] docker container inspect cilium-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:15.727065    5264 cli_runner.go:217] Completed: docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}: (1.0936675s)
	I0516 23:03:15.727206    5264 delete.go:82] Unable to get host status for cilium-20220516225309-2444, assuming it has already been deleted: state: unknown state "cilium-20220516225309-2444": docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	W0516 23:03:15.727330    5264 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for cilium-20220516225309-2444 container: docker volume create cilium-20220516225309-2444 --label name.minikube.sigs.k8s.io=cilium-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cilium-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/cilium-20220516225309-2444': mkdir /var/lib/docker/volumes/cilium-20220516225309-2444: read-only file system
	
	I0516 23:03:15.727330    5264 start.go:623] Will try again in 5 seconds ...
	I0516 23:03:20.730596    5264 start.go:352] acquiring machines lock for cilium-20220516225309-2444: {Name:mk41808534544680dd00277d74c72f2cb58e2b20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 23:03:20.731052    5264 start.go:356] acquired machines lock for "cilium-20220516225309-2444" in 231.9µs
	I0516 23:03:20.731262    5264 start.go:94] Skipping create...Using existing machine configuration
	I0516 23:03:20.731262    5264 fix.go:55] fixHost starting: 
	I0516 23:03:20.747276    5264 cli_runner.go:164] Run: docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}
	W0516 23:03:21.852032    5264 cli_runner.go:211] docker container inspect cilium-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:21.852032    5264 cli_runner.go:217] Completed: docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}: (1.1035775s)
	I0516 23:03:21.852117    5264 fix.go:103] recreateIfNeeded on cilium-20220516225309-2444: state= err=unknown state "cilium-20220516225309-2444": docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	I0516 23:03:21.852117    5264 fix.go:108] machineExists: false. err=machine does not exist
	I0516 23:03:21.862065    5264 out.go:177] * docker "cilium-20220516225309-2444" container is missing, will recreate.
	I0516 23:03:21.863739    5264 delete.go:124] DEMOLISHING cilium-20220516225309-2444 ...
	I0516 23:03:21.885553    5264 cli_runner.go:164] Run: docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}
	W0516 23:03:22.965902    5264 cli_runner.go:211] docker container inspect cilium-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:22.966007    5264 cli_runner.go:217] Completed: docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}: (1.0801255s)
	W0516 23:03:22.966007    5264 stop.go:75] unable to get state: unknown state "cilium-20220516225309-2444": docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	I0516 23:03:22.966007    5264 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "cilium-20220516225309-2444": docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	I0516 23:03:22.989355    5264 cli_runner.go:164] Run: docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}
	W0516 23:03:24.145292    5264 cli_runner.go:211] docker container inspect cilium-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:24.145338    5264 cli_runner.go:217] Completed: docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}: (1.1557892s)
	I0516 23:03:24.145440    5264 delete.go:82] Unable to get host status for cilium-20220516225309-2444, assuming it has already been deleted: state: unknown state "cilium-20220516225309-2444": docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	I0516 23:03:24.154280    5264 cli_runner.go:164] Run: docker container inspect -f {{.Id}} cilium-20220516225309-2444
	W0516 23:03:25.247695    5264 cli_runner.go:211] docker container inspect -f {{.Id}} cilium-20220516225309-2444 returned with exit code 1
	I0516 23:03:25.247695    5264 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} cilium-20220516225309-2444: (1.0934064s)
	I0516 23:03:25.247695    5264 kic.go:356] could not find the container cilium-20220516225309-2444 to remove it. will try anyways
	I0516 23:03:25.254680    5264 cli_runner.go:164] Run: docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}
	W0516 23:03:26.346477    5264 cli_runner.go:211] docker container inspect cilium-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:26.346477    5264 cli_runner.go:217] Completed: docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}: (1.0917882s)
	W0516 23:03:26.346477    5264 oci.go:84] error getting container status, will try to delete anyways: unknown state "cilium-20220516225309-2444": docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	I0516 23:03:26.354436    5264 cli_runner.go:164] Run: docker exec --privileged -t cilium-20220516225309-2444 /bin/bash -c "sudo init 0"
	W0516 23:03:27.454378    5264 cli_runner.go:211] docker exec --privileged -t cilium-20220516225309-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 23:03:27.454431    5264 cli_runner.go:217] Completed: docker exec --privileged -t cilium-20220516225309-2444 /bin/bash -c "sudo init 0": (1.0998422s)
	I0516 23:03:27.454566    5264 oci.go:641] error shutdown cilium-20220516225309-2444: docker exec --privileged -t cilium-20220516225309-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	I0516 23:03:28.465017    5264 cli_runner.go:164] Run: docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}
	W0516 23:03:29.525865    5264 cli_runner.go:211] docker container inspect cilium-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:29.525865    5264 cli_runner.go:217] Completed: docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}: (1.0608027s)
	I0516 23:03:29.525865    5264 oci.go:653] temporary error verifying shutdown: unknown state "cilium-20220516225309-2444": docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	I0516 23:03:29.525865    5264 oci.go:655] temporary error: container cilium-20220516225309-2444 status is  but expect it to be exited
	I0516 23:03:29.525865    5264 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "cilium-20220516225309-2444": docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	I0516 23:03:30.009565    5264 cli_runner.go:164] Run: docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}
	W0516 23:03:31.092195    5264 cli_runner.go:211] docker container inspect cilium-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:31.092195    5264 cli_runner.go:217] Completed: docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}: (1.0825744s)
	I0516 23:03:31.092195    5264 oci.go:653] temporary error verifying shutdown: unknown state "cilium-20220516225309-2444": docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	I0516 23:03:31.092195    5264 oci.go:655] temporary error: container cilium-20220516225309-2444 status is  but expect it to be exited
	I0516 23:03:31.092195    5264 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "cilium-20220516225309-2444": docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	I0516 23:03:31.991732    5264 cli_runner.go:164] Run: docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}
	W0516 23:03:33.068943    5264 cli_runner.go:211] docker container inspect cilium-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:33.069006    5264 cli_runner.go:217] Completed: docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}: (1.0772021s)
	I0516 23:03:33.069106    5264 oci.go:653] temporary error verifying shutdown: unknown state "cilium-20220516225309-2444": docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	I0516 23:03:33.069168    5264 oci.go:655] temporary error: container cilium-20220516225309-2444 status is  but expect it to be exited
	I0516 23:03:33.069208    5264 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "cilium-20220516225309-2444": docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	I0516 23:03:33.720889    5264 cli_runner.go:164] Run: docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}
	W0516 23:03:34.807297    5264 cli_runner.go:211] docker container inspect cilium-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:34.807297    5264 cli_runner.go:217] Completed: docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}: (1.0863989s)
	I0516 23:03:34.807297    5264 oci.go:653] temporary error verifying shutdown: unknown state "cilium-20220516225309-2444": docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	I0516 23:03:34.807297    5264 oci.go:655] temporary error: container cilium-20220516225309-2444 status is  but expect it to be exited
	I0516 23:03:34.807297    5264 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "cilium-20220516225309-2444": docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	I0516 23:03:35.932520    5264 cli_runner.go:164] Run: docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}
	W0516 23:03:36.962795    5264 cli_runner.go:211] docker container inspect cilium-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:36.962844    5264 cli_runner.go:217] Completed: docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}: (1.0295508s)
	I0516 23:03:36.962921    5264 oci.go:653] temporary error verifying shutdown: unknown state "cilium-20220516225309-2444": docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	I0516 23:03:36.962975    5264 oci.go:655] temporary error: container cilium-20220516225309-2444 status is  but expect it to be exited
	I0516 23:03:36.962975    5264 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "cilium-20220516225309-2444": docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	I0516 23:03:38.497637    5264 cli_runner.go:164] Run: docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}
	W0516 23:03:39.567463    5264 cli_runner.go:211] docker container inspect cilium-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:39.567463    5264 cli_runner.go:217] Completed: docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}: (1.0696703s)
	I0516 23:03:39.567639    5264 oci.go:653] temporary error verifying shutdown: unknown state "cilium-20220516225309-2444": docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	I0516 23:03:39.567639    5264 oci.go:655] temporary error: container cilium-20220516225309-2444 status is  but expect it to be exited
	I0516 23:03:39.567639    5264 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "cilium-20220516225309-2444": docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	I0516 23:03:42.630968    5264 cli_runner.go:164] Run: docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}
	W0516 23:03:43.706832    5264 cli_runner.go:211] docker container inspect cilium-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:43.706904    5264 cli_runner.go:217] Completed: docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}: (1.0757427s)
	I0516 23:03:43.707014    5264 oci.go:653] temporary error verifying shutdown: unknown state "cilium-20220516225309-2444": docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	I0516 23:03:43.707014    5264 oci.go:655] temporary error: container cilium-20220516225309-2444 status is  but expect it to be exited
	I0516 23:03:43.707106    5264 oci.go:88] couldn't shut down cilium-20220516225309-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "cilium-20220516225309-2444": docker container inspect cilium-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	 
	I0516 23:03:43.719052    5264 cli_runner.go:164] Run: docker rm -f -v cilium-20220516225309-2444
	I0516 23:03:44.828171    5264 cli_runner.go:217] Completed: docker rm -f -v cilium-20220516225309-2444: (1.1091097s)
	I0516 23:03:44.839436    5264 cli_runner.go:164] Run: docker container inspect -f {{.Id}} cilium-20220516225309-2444
	W0516 23:03:45.960341    5264 cli_runner.go:211] docker container inspect -f {{.Id}} cilium-20220516225309-2444 returned with exit code 1
	I0516 23:03:45.960341    5264 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} cilium-20220516225309-2444: (1.1207924s)
	I0516 23:03:45.968337    5264 cli_runner.go:164] Run: docker network inspect cilium-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:03:47.082008    5264 cli_runner.go:211] docker network inspect cilium-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:03:47.082008    5264 cli_runner.go:217] Completed: docker network inspect cilium-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1132699s)
	I0516 23:03:47.089878    5264 network_create.go:272] running [docker network inspect cilium-20220516225309-2444] to gather additional debugging logs...
	I0516 23:03:47.089878    5264 cli_runner.go:164] Run: docker network inspect cilium-20220516225309-2444
	W0516 23:03:48.125335    5264 cli_runner.go:211] docker network inspect cilium-20220516225309-2444 returned with exit code 1
	I0516 23:03:48.125397    5264 cli_runner.go:217] Completed: docker network inspect cilium-20220516225309-2444: (1.0353251s)
	I0516 23:03:48.125471    5264 network_create.go:275] error running [docker network inspect cilium-20220516225309-2444]: docker network inspect cilium-20220516225309-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20220516225309-2444
	I0516 23:03:48.125471    5264 network_create.go:277] output of [docker network inspect cilium-20220516225309-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20220516225309-2444
	
	** /stderr **
	W0516 23:03:48.126639    5264 delete.go:139] delete failed (probably ok) <nil>
	I0516 23:03:48.126704    5264 fix.go:115] Sleeping 1 second for extra luck!
	I0516 23:03:49.134279    5264 start.go:131] createHost starting for "" (driver="docker")
	I0516 23:03:49.139608    5264 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0516 23:03:49.139983    5264 start.go:165] libmachine.API.Create for "cilium-20220516225309-2444" (driver="docker")
	I0516 23:03:49.139983    5264 client.go:168] LocalClient.Create starting
	I0516 23:03:49.140782    5264 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 23:03:49.140882    5264 main.go:134] libmachine: Decoding PEM data...
	I0516 23:03:49.140882    5264 main.go:134] libmachine: Parsing certificate...
	I0516 23:03:49.140882    5264 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 23:03:49.140882    5264 main.go:134] libmachine: Decoding PEM data...
	I0516 23:03:49.140882    5264 main.go:134] libmachine: Parsing certificate...
	I0516 23:03:49.153202    5264 cli_runner.go:164] Run: docker network inspect cilium-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:03:50.237018    5264 cli_runner.go:211] docker network inspect cilium-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:03:50.237018    5264 cli_runner.go:217] Completed: docker network inspect cilium-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0838067s)
	I0516 23:03:50.244995    5264 network_create.go:272] running [docker network inspect cilium-20220516225309-2444] to gather additional debugging logs...
	I0516 23:03:50.244995    5264 cli_runner.go:164] Run: docker network inspect cilium-20220516225309-2444
	W0516 23:03:51.365425    5264 cli_runner.go:211] docker network inspect cilium-20220516225309-2444 returned with exit code 1
	I0516 23:03:51.365506    5264 cli_runner.go:217] Completed: docker network inspect cilium-20220516225309-2444: (1.1203603s)
	I0516 23:03:51.365573    5264 network_create.go:275] error running [docker network inspect cilium-20220516225309-2444]: docker network inspect cilium-20220516225309-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20220516225309-2444
	I0516 23:03:51.365573    5264 network_create.go:277] output of [docker network inspect cilium-20220516225309-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20220516225309-2444
	
	** /stderr **
	I0516 23:03:51.377536    5264 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 23:03:52.468375    5264 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0907689s)
	I0516 23:03:52.488343    5264 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000114ab8] amended:true}} dirty:map[192.168.49.0:0xc000114ab8 192.168.58.0:0xc0000063e8 192.168.67.0:0xc000646558 192.168.76.0:0xc000114bf0] misses:2}
	I0516 23:03:52.488343    5264 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:03:52.506347    5264 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000114ab8] amended:true}} dirty:map[192.168.49.0:0xc000114ab8 192.168.58.0:0xc0000063e8 192.168.67.0:0xc000646558 192.168.76.0:0xc000114bf0] misses:3}
	I0516 23:03:52.506347    5264 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:03:52.527045    5264 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000114ab8 192.168.58.0:0xc0000063e8 192.168.67.0:0xc000646558 192.168.76.0:0xc000114bf0] amended:false}} dirty:map[] misses:0}
	I0516 23:03:52.527180    5264 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:03:52.543935    5264 network.go:279] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000114ab8 192.168.58.0:0xc0000063e8 192.168.67.0:0xc000646558 192.168.76.0:0xc000114bf0] amended:false}} dirty:map[] misses:0}
	I0516 23:03:52.543935    5264 network.go:238] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:03:52.563419    5264 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000114ab8 192.168.58.0:0xc0000063e8 192.168.67.0:0xc000646558 192.168.76.0:0xc000114bf0] amended:true}} dirty:map[192.168.49.0:0xc000114ab8 192.168.58.0:0xc0000063e8 192.168.67.0:0xc000646558 192.168.76.0:0xc000114bf0 192.168.85.0:0xc000646608] misses:0}
	I0516 23:03:52.564019    5264 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:03:52.564019    5264 network_create.go:115] attempt to create docker network cilium-20220516225309-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0516 23:03:52.572613    5264 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220516225309-2444
	W0516 23:03:53.657175    5264 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220516225309-2444 returned with exit code 1
	I0516 23:03:53.657175    5264 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220516225309-2444: (1.0845524s)
	E0516 23:03:53.657175    5264 network_create.go:104] error while trying to create docker network cilium-20220516225309-2444 192.168.85.0/24: create docker network cilium-20220516225309-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220516225309-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 1557d64422515fab8533dceaa5d002e03e358efd4a8e1e4337b77d7b81b8f4ef (br-1557d6442251): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	W0516 23:03:53.657175    5264 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cilium-20220516225309-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220516225309-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 1557d64422515fab8533dceaa5d002e03e358efd4a8e1e4337b77d7b81b8f4ef (br-1557d6442251): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cilium-20220516225309-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220516225309-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 1557d64422515fab8533dceaa5d002e03e358efd4a8e1e4337b77d7b81b8f4ef (br-1557d6442251): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	I0516 23:03:53.676776    5264 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 23:03:54.835510    5264 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1587245s)
	I0516 23:03:54.843824    5264 cli_runner.go:164] Run: docker volume create cilium-20220516225309-2444 --label name.minikube.sigs.k8s.io=cilium-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 23:03:55.925761    5264 cli_runner.go:211] docker volume create cilium-20220516225309-2444 --label name.minikube.sigs.k8s.io=cilium-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 23:03:55.925761    5264 cli_runner.go:217] Completed: docker volume create cilium-20220516225309-2444 --label name.minikube.sigs.k8s.io=cilium-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0819269s)
	I0516 23:03:55.925761    5264 client.go:171] LocalClient.Create took 6.7857195s
	I0516 23:03:57.941214    5264 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:03:57.947534    5264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444
	W0516 23:03:59.046623    5264 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444 returned with exit code 1
	I0516 23:03:59.046683    5264 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444: (1.0989072s)
	I0516 23:03:59.046794    5264 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	I0516 23:03:59.398929    5264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444
	W0516 23:04:00.462298    5264 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444 returned with exit code 1
	I0516 23:04:00.462298    5264 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444: (1.0633593s)
	W0516 23:04:00.462298    5264 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	
	W0516 23:04:00.462298    5264 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	I0516 23:04:00.472936    5264 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:04:00.481518    5264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444
	W0516 23:04:01.553158    5264 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444 returned with exit code 1
	I0516 23:04:01.553311    5264 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444: (1.0714442s)
	I0516 23:04:01.553474    5264 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	I0516 23:04:01.785455    5264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444
	W0516 23:04:02.854036    5264 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444 returned with exit code 1
	I0516 23:04:02.854084    5264 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444: (1.0684247s)
	W0516 23:04:02.854236    5264 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	
	W0516 23:04:02.854335    5264 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	I0516 23:04:02.854335    5264 start.go:134] duration metric: createHost completed in 13.7199376s
	I0516 23:04:02.866622    5264 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:04:02.874848    5264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444
	W0516 23:04:03.983619    5264 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444 returned with exit code 1
	I0516 23:04:03.983667    5264 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444: (1.1086896s)
	I0516 23:04:03.983914    5264 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	I0516 23:04:04.252298    5264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444
	W0516 23:04:05.396795    5264 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444 returned with exit code 1
	I0516 23:04:05.396795    5264 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444: (1.1444867s)
	W0516 23:04:05.396795    5264 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	
	W0516 23:04:05.396795    5264 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	I0516 23:04:05.408570    5264 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:04:05.416566    5264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444
	W0516 23:04:06.528854    5264 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444 returned with exit code 1
	I0516 23:04:06.528906    5264 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444: (1.1121289s)
	I0516 23:04:06.528980    5264 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	I0516 23:04:06.744738    5264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444
	W0516 23:04:07.873648    5264 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444 returned with exit code 1
	I0516 23:04:07.873648    5264 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444: (1.1288423s)
	W0516 23:04:07.873863    5264 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	
	W0516 23:04:07.873863    5264 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220516225309-2444
	I0516 23:04:07.873863    5264 fix.go:57] fixHost completed within 47.142195s
	I0516 23:04:07.873926    5264 start.go:81] releasing machines lock for "cilium-20220516225309-2444", held for 47.1424366s
	W0516 23:04:07.874472    5264 out.go:239] * Failed to start docker container. Running "minikube delete -p cilium-20220516225309-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for cilium-20220516225309-2444 container: docker volume create cilium-20220516225309-2444 --label name.minikube.sigs.k8s.io=cilium-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cilium-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/cilium-20220516225309-2444': mkdir /var/lib/docker/volumes/cilium-20220516225309-2444: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p cilium-20220516225309-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for cilium-20220516225309-2444 container: docker volume create cilium-20220516225309-2444 --label name.minikube.sigs.k8s.io=cilium-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cilium-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/cilium-20220516225309-2444': mkdir /var/lib/docker/volumes/cilium-20220516225309-2444: read-only file system
	
	I0516 23:04:07.879310    5264 out.go:177] 
	W0516 23:04:07.881900    5264 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for cilium-20220516225309-2444 container: docker volume create cilium-20220516225309-2444 --label name.minikube.sigs.k8s.io=cilium-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cilium-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/cilium-20220516225309-2444': mkdir /var/lib/docker/volumes/cilium-20220516225309-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for cilium-20220516225309-2444 container: docker volume create cilium-20220516225309-2444 --label name.minikube.sigs.k8s.io=cilium-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cilium-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/cilium-20220516225309-2444': mkdir /var/lib/docker/volumes/cilium-20220516225309-2444: read-only file system
	
	W0516 23:04:07.882146    5264 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0516 23:04:07.882256    5264 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0516 23:04:07.886252    5264 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 60
--- FAIL: TestNetworkPlugins/group/cilium/Start (81.76s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (10.29s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220516230045-2444 -n default-k8s-different-port-20220516230045-2444

=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220516230045-2444 -n default-k8s-different-port-20220516230045-2444: exit status 7 (2.9288393s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 23:02:57.505049    1756 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444

** /stderr **
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:243: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-different-port-20220516230045-2444 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4

=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-different-port-20220516230045-2444 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (3.0904405s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220516230045-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220516230045-2444: exit status 1 (1.2350847s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220516230045-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220516230045-2444 -n default-k8s-different-port-20220516230045-2444

=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220516230045-2444 -n default-k8s-different-port-20220516230045-2444: exit status 7 (3.0229703s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 23:03:04.864129    7564 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220516230045-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (10.29s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (10.38s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220516230100-2444 -n newest-cni-20220516230100-2444

=== CONT  TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220516230100-2444 -n newest-cni-20220516230100-2444: exit status 7 (2.9991867s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 23:02:59.829349    8548 status.go:247] status error: host: state: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444

** /stderr **
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:243: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-20220516230100-2444 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4

=== CONT  TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-20220516230100-2444 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (3.1221626s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220516230100-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220516230100-2444: exit status 1 (1.1708456s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: newest-cni-20220516230100-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220516230100-2444 -n newest-cni-20220516230100-2444

=== CONT  TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220516230100-2444 -n newest-cni-20220516230100-2444: exit status 7 (3.0758535s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 23:03:07.208966    7204 status.go:247] status error: host: state: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20220516230100-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (10.38s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (122.56s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220516230045-2444 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.6

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220516230045-2444 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.6: exit status 60 (1m58.1365706s)

-- stdout --
	* [default-k8s-different-port-20220516230045-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node default-k8s-different-port-20220516230045-2444 in cluster default-k8s-different-port-20220516230045-2444
	* Pulling base image ...
	* docker "default-k8s-different-port-20220516230045-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "default-k8s-different-port-20220516230045-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0516 23:03:05.122912    7620 out.go:296] Setting OutFile to fd 1568 ...
	I0516 23:03:05.193416    7620 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:03:05.193416    7620 out.go:309] Setting ErrFile to fd 1780...
	I0516 23:03:05.193416    7620 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:03:05.204880    7620 out.go:303] Setting JSON to false
	I0516 23:03:05.206951    7620 start.go:115] hostinfo: {"hostname":"minikube2","uptime":5297,"bootTime":1652736888,"procs":156,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0516 23:03:05.207696    7620 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0516 23:03:05.211555    7620 out.go:177] * [default-k8s-different-port-20220516230045-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0516 23:03:05.215424    7620 notify.go:193] Checking for updates...
	I0516 23:03:05.217969    7620 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0516 23:03:05.221039    7620 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0516 23:03:05.225420    7620 out.go:177]   - MINIKUBE_LOCATION=12739
	I0516 23:03:05.228947    7620 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0516 23:03:05.232078    7620 config.go:178] Loaded profile config "default-k8s-different-port-20220516230045-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:03:05.232900    7620 driver.go:358] Setting default libvirt URI to qemu:///system
	I0516 23:03:07.982924    7620 docker.go:137] docker version: linux-20.10.14
	I0516 23:03:07.994979    7620 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 23:03:10.052889    7620 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0578341s)
	I0516 23:03:10.053676    7620 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 23:03:09.0082713 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0516 23:03:10.059132    7620 out.go:177] * Using the docker driver based on existing profile
	I0516 23:03:10.061342    7620 start.go:284] selected driver: docker
	I0516 23:03:10.061386    7620 start.go:806] validating driver "docker" against &{Name:default-k8s-different-port-20220516230045-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220516230045-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 23:03:10.061557    7620 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0516 23:03:10.140082    7620 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 23:03:12.358412    7620 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.2181936s)
	I0516 23:03:12.358874    7620 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 23:03:11.2391752 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0516 23:03:12.359381    7620 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0516 23:03:12.359381    7620 cni.go:95] Creating CNI manager for ""
	I0516 23:03:12.359381    7620 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0516 23:03:12.359381    7620 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220516230045-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220516230045-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 23:03:12.364583    7620 out.go:177] * Starting control plane node default-k8s-different-port-20220516230045-2444 in cluster default-k8s-different-port-20220516230045-2444
	I0516 23:03:12.366481    7620 cache.go:120] Beginning downloading kic base image for docker with docker
	I0516 23:03:12.369790    7620 out.go:177] * Pulling base image ...
	I0516 23:03:12.371438    7620 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0516 23:03:12.371438    7620 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0516 23:03:12.371438    7620 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0516 23:03:12.372129    7620 cache.go:57] Caching tarball of preloaded images
	I0516 23:03:12.372163    7620 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0516 23:03:12.372692    7620 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0516 23:03:12.372751    7620 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-different-port-20220516230045-2444\config.json ...
	I0516 23:03:13.516175    7620 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0516 23:03:13.516175    7620 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:03:13.516175    7620 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:03:13.516175    7620 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
	I0516 23:03:13.516175    7620 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory, skipping pull
	I0516 23:03:13.516175    7620 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in cache, skipping pull
	I0516 23:03:13.516842    7620 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
	I0516 23:03:13.516842    7620 cache.go:160] Loading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from local cache
	I0516 23:03:13.516944    7620 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:03:15.885218    7620 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from cached tarball
	I0516 23:03:15.885218    7620 cache.go:206] Successfully downloaded all kic artifacts
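The two `windows sanitize` lines above show the cache-path rewrite localpath.go logs: colons are not allowed in NTFS file names, so every `:` after the drive designator is replaced with `_` before the kicbase tarball is written to the cache. A small illustrative reimplementation of that rewrite (the function name `sanitizeWindowsCachePath` is hypothetical, not minikube's):

```go
package main

import (
	"fmt"
	"strings"
)

// sanitizeWindowsCachePath keeps the drive designator ("C:") intact and
// rewrites every other ':' to '_', matching the before/after pair in the
// log above. Illustrative sketch, not minikube's actual implementation.
func sanitizeWindowsCachePath(p string) string {
	if len(p) > 1 && p[1] == ':' {
		return p[:2] + strings.ReplaceAll(p[2:], ":", "_")
	}
	return strings.ReplaceAll(p, ":", "_")
}

func main() {
	fmt.Println(sanitizeWindowsCachePath(`C:\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f.tar`))
	// prints "C:\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f.tar"
}
```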
	I0516 23:03:15.885837    7620 start.go:352] acquiring machines lock for default-k8s-different-port-20220516230045-2444: {Name:mkca2c0574e16790f4d61bb6412ca78505ef9070 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 23:03:15.886080    7620 start.go:356] acquired machines lock for "default-k8s-different-port-20220516230045-2444" in 177.9µs
	I0516 23:03:15.886080    7620 start.go:94] Skipping create...Using existing machine configuration
	I0516 23:03:15.886080    7620 fix.go:55] fixHost starting: 
	I0516 23:03:15.910488    7620 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:03:16.996746    7620 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:16.996746    7620 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.0861689s)
	I0516 23:03:16.996746    7620 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220516230045-2444: state= err=unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:03:16.996746    7620 fix.go:108] machineExists: false. err=machine does not exist
	I0516 23:03:17.001094    7620 out.go:177] * docker "default-k8s-different-port-20220516230045-2444" container is missing, will recreate.
	I0516 23:03:17.003411    7620 delete.go:124] DEMOLISHING default-k8s-different-port-20220516230045-2444 ...
	I0516 23:03:17.020747    7620 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:03:18.072091    7620 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:18.072091    7620 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.0513349s)
	W0516 23:03:18.072091    7620 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:03:18.072091    7620 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:03:18.087078    7620 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:03:19.178900    7620 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:19.178900    7620 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.0918129s)
	I0516 23:03:19.178900    7620 delete.go:82] Unable to get host status for default-k8s-different-port-20220516230045-2444, assuming it has already been deleted: state: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:03:19.186904    7620 cli_runner.go:164] Run: docker container inspect -f {{.Id}} default-k8s-different-port-20220516230045-2444
	W0516 23:03:20.237699    7620 cli_runner.go:211] docker container inspect -f {{.Id}} default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:03:20.237699    7620 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} default-k8s-different-port-20220516230045-2444: (1.0498262s)
	I0516 23:03:20.237699    7620 kic.go:356] could not find the container default-k8s-different-port-20220516230045-2444 to remove it. will try anyways
	I0516 23:03:20.244691    7620 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:03:21.390922    7620 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:21.390922    7620 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.1461455s)
	W0516 23:03:21.390922    7620 oci.go:84] error getting container status, will try to delete anyways: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:03:21.399244    7620 cli_runner.go:164] Run: docker exec --privileged -t default-k8s-different-port-20220516230045-2444 /bin/bash -c "sudo init 0"
	W0516 23:03:22.527702    7620 cli_runner.go:211] docker exec --privileged -t default-k8s-different-port-20220516230045-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 23:03:22.527702    7620 cli_runner.go:217] Completed: docker exec --privileged -t default-k8s-different-port-20220516230045-2444 /bin/bash -c "sudo init 0": (1.1284476s)
	I0516 23:03:22.527702    7620 oci.go:641] error shutdown default-k8s-different-port-20220516230045-2444: docker exec --privileged -t default-k8s-different-port-20220516230045-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:03:23.539784    7620 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:03:24.701761    7620 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:24.701761    7620 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.1613466s)
	I0516 23:03:24.701761    7620 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:03:24.701761    7620 oci.go:655] temporary error: container default-k8s-different-port-20220516230045-2444 status is  but expect it to be exited
	I0516 23:03:24.701761    7620 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:03:25.270676    7620 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:03:26.362441    7620 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:26.362441    7620 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.0917564s)
	I0516 23:03:26.362441    7620 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:03:26.362441    7620 oci.go:655] temporary error: container default-k8s-different-port-20220516230045-2444 status is  but expect it to be exited
	I0516 23:03:26.362441    7620 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:03:27.463199    7620 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:03:28.516676    7620 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:28.516676    7620 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.0534684s)
	I0516 23:03:28.516676    7620 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:03:28.516676    7620 oci.go:655] temporary error: container default-k8s-different-port-20220516230045-2444 status is  but expect it to be exited
	I0516 23:03:28.516676    7620 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:03:29.836927    7620 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:03:30.938395    7620 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:30.938453    7620 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.1014591s)
	I0516 23:03:30.938574    7620 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:03:30.938574    7620 oci.go:655] temporary error: container default-k8s-different-port-20220516230045-2444 status is  but expect it to be exited
	I0516 23:03:30.938660    7620 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:03:32.532604    7620 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:03:33.649929    7620 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:33.649929    7620 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.1173148s)
	I0516 23:03:33.649929    7620 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:03:33.649929    7620 oci.go:655] temporary error: container default-k8s-different-port-20220516230045-2444 status is  but expect it to be exited
	I0516 23:03:33.649929    7620 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:03:36.010401    7620 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:03:37.087409    7620 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:37.087502    7620 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.0768548s)
	I0516 23:03:37.087659    7620 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:03:37.087729    7620 oci.go:655] temporary error: container default-k8s-different-port-20220516230045-2444 status is  but expect it to be exited
	I0516 23:03:37.087729    7620 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:03:41.619034    7620 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:03:42.681137    7620 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:42.681281    7620 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.0620078s)
	I0516 23:03:42.681281    7620 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:03:42.681281    7620 oci.go:655] temporary error: container default-k8s-different-port-20220516230045-2444 status is  but expect it to be exited
	I0516 23:03:42.681281    7620 oci.go:88] couldn't shut down default-k8s-different-port-20220516230045-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	 
	I0516 23:03:42.691378    7620 cli_runner.go:164] Run: docker rm -f -v default-k8s-different-port-20220516230045-2444
	I0516 23:03:43.770814    7620 cli_runner.go:217] Completed: docker rm -f -v default-k8s-different-port-20220516230045-2444: (1.0793302s)
	I0516 23:03:43.780209    7620 cli_runner.go:164] Run: docker container inspect -f {{.Id}} default-k8s-different-port-20220516230045-2444
	W0516 23:03:44.891332    7620 cli_runner.go:211] docker container inspect -f {{.Id}} default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:03:44.891332    7620 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} default-k8s-different-port-20220516230045-2444: (1.1111135s)
	I0516 23:03:44.901336    7620 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220516230045-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:03:45.991392    7620 cli_runner.go:211] docker network inspect default-k8s-different-port-20220516230045-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:03:45.991392    7620 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220516230045-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0900462s)
	I0516 23:03:45.998375    7620 network_create.go:272] running [docker network inspect default-k8s-different-port-20220516230045-2444] to gather additional debugging logs...
	I0516 23:03:45.999379    7620 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220516230045-2444
	W0516 23:03:47.096887    7620 cli_runner.go:211] docker network inspect default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:03:47.096887    7620 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220516230045-2444: (1.0974984s)
	I0516 23:03:47.096887    7620 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220516230045-2444]: docker network inspect default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220516230045-2444
	I0516 23:03:47.096887    7620 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220516230045-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220516230045-2444
	
	** /stderr **
	W0516 23:03:47.097897    7620 delete.go:139] delete failed (probably ok) <nil>
	I0516 23:03:47.097897    7620 fix.go:115] Sleeping 1 second for extra luck!
	I0516 23:03:48.109559    7620 start.go:131] createHost starting for "" (driver="docker")
	I0516 23:03:48.113179    7620 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0516 23:03:48.113905    7620 start.go:165] libmachine.API.Create for "default-k8s-different-port-20220516230045-2444" (driver="docker")
	I0516 23:03:48.114025    7620 client.go:168] LocalClient.Create starting
	I0516 23:03:48.114587    7620 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 23:03:48.114587    7620 main.go:134] libmachine: Decoding PEM data...
	I0516 23:03:48.114587    7620 main.go:134] libmachine: Parsing certificate...
	I0516 23:03:48.114587    7620 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 23:03:48.115161    7620 main.go:134] libmachine: Decoding PEM data...
	I0516 23:03:48.115229    7620 main.go:134] libmachine: Parsing certificate...
	I0516 23:03:48.124359    7620 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220516230045-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:03:49.241402    7620 cli_runner.go:211] docker network inspect default-k8s-different-port-20220516230045-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:03:49.241502    7620 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220516230045-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1168907s)
	I0516 23:03:49.252191    7620 network_create.go:272] running [docker network inspect default-k8s-different-port-20220516230045-2444] to gather additional debugging logs...
	I0516 23:03:49.252191    7620 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220516230045-2444
	W0516 23:03:50.314846    7620 cli_runner.go:211] docker network inspect default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:03:50.314908    7620 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220516230045-2444: (1.0624584s)
	I0516 23:03:50.314908    7620 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220516230045-2444]: docker network inspect default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220516230045-2444
	I0516 23:03:50.314908    7620 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220516230045-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220516230045-2444
	
	** /stderr **
	I0516 23:03:50.325398    7620 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 23:03:51.411347    7620 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0858504s)
	I0516 23:03:51.430446    7620 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00014e258] misses:0}
	I0516 23:03:51.430446    7620 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:03:51.430446    7620 network_create.go:115] attempt to create docker network default-k8s-different-port-20220516230045-2444 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0516 23:03:51.439062    7620 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444
	W0516 23:03:52.547108    7620 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:03:52.547156    7620 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444: (1.1079939s)
	W0516 23:03:52.547156    7620 network_create.go:107] failed to create docker network default-k8s-different-port-20220516230045-2444 192.168.49.0/24, will retry: subnet is taken
	I0516 23:03:52.566499    7620 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014e258] amended:false}} dirty:map[] misses:0}
	I0516 23:03:52.566499    7620 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:03:52.583906    7620 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014e258] amended:true}} dirty:map[192.168.49.0:0xc00014e258 192.168.58.0:0xc0007424e0] misses:0}
	I0516 23:03:52.583906    7620 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:03:52.583906    7620 network_create.go:115] attempt to create docker network default-k8s-different-port-20220516230045-2444 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0516 23:03:52.591902    7620 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444
	W0516 23:03:53.689226    7620 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:03:53.689226    7620 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444: (1.0973142s)
	W0516 23:03:53.689226    7620 network_create.go:107] failed to create docker network default-k8s-different-port-20220516230045-2444 192.168.58.0/24, will retry: subnet is taken
	I0516 23:03:53.710430    7620 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014e258] amended:true}} dirty:map[192.168.49.0:0xc00014e258 192.168.58.0:0xc0007424e0] misses:1}
	I0516 23:03:53.710430    7620 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:03:53.729231    7620 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014e258] amended:true}} dirty:map[192.168.49.0:0xc00014e258 192.168.58.0:0xc0007424e0 192.168.67.0:0xc00014e3c0] misses:1}
	I0516 23:03:53.729231    7620 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:03:53.729231    7620 network_create.go:115] attempt to create docker network default-k8s-different-port-20220516230045-2444 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0516 23:03:53.741499    7620 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444
	W0516 23:03:54.882829    7620 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:03:54.882829    7620 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444: (1.1413198s)
	W0516 23:03:54.882829    7620 network_create.go:107] failed to create docker network default-k8s-different-port-20220516230045-2444 192.168.67.0/24, will retry: subnet is taken
	I0516 23:03:54.899417    7620 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014e258] amended:true}} dirty:map[192.168.49.0:0xc00014e258 192.168.58.0:0xc0007424e0 192.168.67.0:0xc00014e3c0] misses:2}
	I0516 23:03:54.900060    7620 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:03:54.916197    7620 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014e258] amended:true}} dirty:map[192.168.49.0:0xc00014e258 192.168.58.0:0xc0007424e0 192.168.67.0:0xc00014e3c0 192.168.76.0:0xc0005c8420] misses:2}
	I0516 23:03:54.916197    7620 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:03:54.916197    7620 network_create.go:115] attempt to create docker network default-k8s-different-port-20220516230045-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0516 23:03:54.927687    7620 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444
	W0516 23:03:56.002668    7620 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:03:56.002749    7620 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444: (1.0747654s)
	E0516 23:03:56.002866    7620 network_create.go:104] error while trying to create docker network default-k8s-different-port-20220516230045-2444 192.168.76.0/24: create docker network default-k8s-different-port-20220516230045-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network fb56ab15ca90372cd2a312c8cc664664e89b722983e32d02e4fca79a80191b19 (br-fb56ab15ca90): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	W0516 23:03:56.003080    7620 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network default-k8s-different-port-20220516230045-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network fb56ab15ca90372cd2a312c8cc664664e89b722983e32d02e4fca79a80191b19 (br-fb56ab15ca90): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network default-k8s-different-port-20220516230045-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network fb56ab15ca90372cd2a312c8cc664664e89b722983e32d02e4fca79a80191b19 (br-fb56ab15ca90): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
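The daemon error above rejects each candidate bridge because its CIDR overlaps an already-allocated network. Two CIDR blocks overlap exactly when one contains the other's network base, which can be checked with the Go standard library — a small illustrative sketch (not minikube's network.go):

```go
package main

import (
	"fmt"
	"net"
)

// overlaps reports whether two IPv4 CIDR blocks share any addresses,
// the condition Docker rejects with "networks have overlapping IPv4".
func overlaps(a, b string) (bool, error) {
	_, na, err := net.ParseCIDR(a)
	if err != nil {
		return false, err
	}
	_, nb, err := net.ParseCIDR(b)
	if err != nil {
		return false, err
	}
	// One block contains the other's base address iff they overlap.
	return na.Contains(nb.IP) || nb.Contains(na.IP), nil
}

func main() {
	hit, _ := overlaps("192.168.76.0/24", "192.168.76.128/25")
	miss, _ := overlaps("192.168.49.0/24", "192.168.58.0/24")
	fmt.Println(hit, miss) // prints: true false
}
```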
	
	I0516 23:03:56.025125    7620 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 23:03:57.121606    7620 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0961861s)
	I0516 23:03:57.131376    7620 cli_runner.go:164] Run: docker volume create default-k8s-different-port-20220516230045-2444 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220516230045-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 23:03:58.179335    7620 cli_runner.go:211] docker volume create default-k8s-different-port-20220516230045-2444 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220516230045-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 23:03:58.179427    7620 cli_runner.go:217] Completed: docker volume create default-k8s-different-port-20220516230045-2444 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220516230045-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0479498s)
	I0516 23:03:58.179511    7620 client.go:171] LocalClient.Create took 10.0653992s
	I0516 23:04:00.193314    7620 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:04:00.212037    7620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444
	W0516 23:04:01.287471    7620 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:04:01.287620    7620 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: (1.0752238s)
	I0516 23:04:01.287886    7620 retry.go:31] will retry after 164.129813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:04:01.471127    7620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444
	W0516 23:04:02.517558    7620 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:04:02.517558    7620 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: (1.0463227s)
	W0516 23:04:02.517558    7620 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	
	W0516 23:04:02.517558    7620 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:04:02.529405    7620 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:04:02.536147    7620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444
	W0516 23:04:03.669244    7620 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:04:03.669312    7620 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: (1.1329441s)
	I0516 23:04:03.669377    7620 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:04:03.885968    7620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444
	W0516 23:04:05.022291    7620 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:04:05.022291    7620 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: (1.1363133s)
	W0516 23:04:05.022291    7620 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	
	W0516 23:04:05.022291    7620 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:04:05.022291    7620 start.go:134] duration metric: createHost completed in 16.9125868s
	I0516 23:04:05.033289    7620 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:04:05.041292    7620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444
	W0516 23:04:06.167755    7620 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:04:06.167972    7620 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: (1.1264526s)
	I0516 23:04:06.168150    7620 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:04:06.507367    7620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444
	W0516 23:04:07.629284    7620 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:04:07.629332    7620 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: (1.1218005s)
	W0516 23:04:07.629681    7620 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	
	W0516 23:04:07.629727    7620 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:04:07.641593    7620 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:04:07.648263    7620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444
	W0516 23:04:08.766418    7620 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:04:08.766571    7620 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: (1.1181033s)
	I0516 23:04:08.766746    7620 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:04:08.999557    7620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444
	W0516 23:04:10.150126    7620 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:04:10.150126    7620 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: (1.1505582s)
	W0516 23:04:10.150126    7620 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	
	W0516 23:04:10.150126    7620 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:04:10.150126    7620 fix.go:57] fixHost completed within 54.2635781s
	I0516 23:04:10.150126    7620 start.go:81] releasing machines lock for "default-k8s-different-port-20220516230045-2444", held for 54.2635781s
	W0516 23:04:10.150126    7620 start.go:608] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220516230045-2444 container: docker volume create default-k8s-different-port-20220516230045-2444 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220516230045-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220516230045-2444: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220516230045-2444': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220516230045-2444: read-only file system
	W0516 23:04:10.150126    7620 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220516230045-2444 container: docker volume create default-k8s-different-port-20220516230045-2444 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220516230045-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220516230045-2444: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220516230045-2444': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220516230045-2444: read-only file system
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220516230045-2444 container: docker volume create default-k8s-different-port-20220516230045-2444 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220516230045-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220516230045-2444: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220516230045-2444': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220516230045-2444: read-only file system
	
	I0516 23:04:10.150126    7620 start.go:623] Will try again in 5 seconds ...
	I0516 23:04:15.153535    7620 start.go:352] acquiring machines lock for default-k8s-different-port-20220516230045-2444: {Name:mkca2c0574e16790f4d61bb6412ca78505ef9070 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 23:04:15.153535    7620 start.go:356] acquired machines lock for "default-k8s-different-port-20220516230045-2444" in 0s
	I0516 23:04:15.153535    7620 start.go:94] Skipping create...Using existing machine configuration
	I0516 23:04:15.153535    7620 fix.go:55] fixHost starting: 
	I0516 23:04:15.172055    7620 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:04:16.276425    7620 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:04:16.276511    7620 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.1041843s)
	I0516 23:04:16.276563    7620 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220516230045-2444: state= err=unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:04:16.276563    7620 fix.go:108] machineExists: false. err=machine does not exist
	I0516 23:04:16.279694    7620 out.go:177] * docker "default-k8s-different-port-20220516230045-2444" container is missing, will recreate.
	I0516 23:04:16.282470    7620 delete.go:124] DEMOLISHING default-k8s-different-port-20220516230045-2444 ...
	I0516 23:04:16.302320    7620 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:04:17.376712    7620 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:04:17.376712    7620 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.0741946s)
	W0516 23:04:17.376712    7620 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:04:17.376712    7620 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:04:17.392381    7620 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:04:18.451020    7620 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:04:18.451020    7620 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.05863s)
	I0516 23:04:18.451020    7620 delete.go:82] Unable to get host status for default-k8s-different-port-20220516230045-2444, assuming it has already been deleted: state: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:04:18.458007    7620 cli_runner.go:164] Run: docker container inspect -f {{.Id}} default-k8s-different-port-20220516230045-2444
	W0516 23:04:19.531638    7620 cli_runner.go:211] docker container inspect -f {{.Id}} default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:04:19.531638    7620 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} default-k8s-different-port-20220516230045-2444: (1.0736217s)
	I0516 23:04:19.531638    7620 kic.go:356] could not find the container default-k8s-different-port-20220516230045-2444 to remove it. will try anyways
	I0516 23:04:19.540642    7620 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:04:20.637736    7620 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:04:20.637736    7620 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.0970844s)
	W0516 23:04:20.637736    7620 oci.go:84] error getting container status, will try to delete anyways: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:04:20.644732    7620 cli_runner.go:164] Run: docker exec --privileged -t default-k8s-different-port-20220516230045-2444 /bin/bash -c "sudo init 0"
	W0516 23:04:21.699097    7620 cli_runner.go:211] docker exec --privileged -t default-k8s-different-port-20220516230045-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 23:04:21.699097    7620 cli_runner.go:217] Completed: docker exec --privileged -t default-k8s-different-port-20220516230045-2444 /bin/bash -c "sudo init 0": (1.0543557s)
	I0516 23:04:21.699097    7620 oci.go:641] error shutdown default-k8s-different-port-20220516230045-2444: docker exec --privileged -t default-k8s-different-port-20220516230045-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:04:22.709796    7620 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:04:23.783827    7620 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:04:23.783827    7620 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.0740216s)
	I0516 23:04:23.783827    7620 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:04:23.783827    7620 oci.go:655] temporary error: container default-k8s-different-port-20220516230045-2444 status is  but expect it to be exited
	I0516 23:04:23.783827    7620 retry.go:31] will retry after 484.444922ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:04:24.279238    7620 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:04:25.359369    7620 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:04:25.359533    7620 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.0801213s)
	I0516 23:04:25.359597    7620 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:04:25.359597    7620 oci.go:655] temporary error: container default-k8s-different-port-20220516230045-2444 status is  but expect it to be exited
	I0516 23:04:25.359597    7620 retry.go:31] will retry after 587.275613ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:04:25.966028    7620 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:04:27.060117    7620 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:04:27.060117    7620 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.0940796s)
	I0516 23:04:27.060117    7620 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:04:27.060117    7620 oci.go:655] temporary error: container default-k8s-different-port-20220516230045-2444 status is  but expect it to be exited
	I0516 23:04:27.060117    7620 retry.go:31] will retry after 892.239589ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:04:27.967111    7620 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:04:29.033727    7620 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:04:29.033727    7620 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.0665582s)
	I0516 23:04:29.033727    7620 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:04:29.033727    7620 oci.go:655] temporary error: container default-k8s-different-port-20220516230045-2444 status is  but expect it to be exited
	I0516 23:04:29.033727    7620 retry.go:31] will retry after 1.989705391s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:04:31.032908    7620 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:04:32.117330    7620 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:04:32.117432    7620 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.0841599s)
	I0516 23:04:32.117526    7620 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:04:32.117589    7620 oci.go:655] temporary error: container default-k8s-different-port-20220516230045-2444 status is  but expect it to be exited
	I0516 23:04:32.117640    7620 retry.go:31] will retry after 1.818837414s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:04:33.955451    7620 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:04:35.051280    7620 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:04:35.051280    7620 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.0958192s)
	I0516 23:04:35.051280    7620 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:04:35.051280    7620 oci.go:655] temporary error: container default-k8s-different-port-20220516230045-2444 status is  but expect it to be exited
	I0516 23:04:35.051280    7620 retry.go:31] will retry after 2.669912672s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:04:37.738194    7620 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:04:38.867117    7620 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:04:38.867243    7620 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (1.1283985s)
	I0516 23:04:38.867243    7620 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:04:38.867243    7620 oci.go:655] temporary error: container default-k8s-different-port-20220516230045-2444 status is  but expect it to be exited
	I0516 23:04:38.867243    7620 oci.go:88] couldn't shut down default-k8s-different-port-20220516230045-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	 
	I0516 23:04:38.875911    7620 cli_runner.go:164] Run: docker rm -f -v default-k8s-different-port-20220516230045-2444
	I0516 23:04:39.942797    7620 cli_runner.go:217] Completed: docker rm -f -v default-k8s-different-port-20220516230045-2444: (1.0668763s)
	I0516 23:04:39.949810    7620 cli_runner.go:164] Run: docker container inspect -f {{.Id}} default-k8s-different-port-20220516230045-2444
	W0516 23:04:41.023835    7620 cli_runner.go:211] docker container inspect -f {{.Id}} default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:04:41.023835    7620 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} default-k8s-different-port-20220516230045-2444: (1.0740159s)
	I0516 23:04:41.034572    7620 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220516230045-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:04:42.139528    7620 cli_runner.go:211] docker network inspect default-k8s-different-port-20220516230045-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:04:42.139528    7620 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220516230045-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1049463s)
	I0516 23:04:42.146526    7620 network_create.go:272] running [docker network inspect default-k8s-different-port-20220516230045-2444] to gather additional debugging logs...
	I0516 23:04:42.146526    7620 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220516230045-2444
	W0516 23:04:43.213459    7620 cli_runner.go:211] docker network inspect default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:04:43.213653    7620 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220516230045-2444: (1.066751s)
	I0516 23:04:43.213653    7620 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220516230045-2444]: docker network inspect default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220516230045-2444
	I0516 23:04:43.213653    7620 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220516230045-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220516230045-2444
	
	** /stderr **
	W0516 23:04:43.214855    7620 delete.go:139] delete failed (probably ok) <nil>
	I0516 23:04:43.214855    7620 fix.go:115] Sleeping 1 second for extra luck!
	I0516 23:04:44.224944    7620 start.go:131] createHost starting for "" (driver="docker")
	I0516 23:04:44.229953    7620 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0516 23:04:44.229953    7620 start.go:165] libmachine.API.Create for "default-k8s-different-port-20220516230045-2444" (driver="docker")
	I0516 23:04:44.229953    7620 client.go:168] LocalClient.Create starting
	I0516 23:04:44.229953    7620 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 23:04:44.229953    7620 main.go:134] libmachine: Decoding PEM data...
	I0516 23:04:44.230942    7620 main.go:134] libmachine: Parsing certificate...
	I0516 23:04:44.230942    7620 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 23:04:44.230942    7620 main.go:134] libmachine: Decoding PEM data...
	I0516 23:04:44.230942    7620 main.go:134] libmachine: Parsing certificate...
	I0516 23:04:44.239938    7620 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220516230045-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:04:45.352358    7620 cli_runner.go:211] docker network inspect default-k8s-different-port-20220516230045-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:04:45.352358    7620 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220516230045-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1124105s)
	I0516 23:04:45.359438    7620 network_create.go:272] running [docker network inspect default-k8s-different-port-20220516230045-2444] to gather additional debugging logs...
	I0516 23:04:45.359438    7620 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220516230045-2444
	W0516 23:04:46.448915    7620 cli_runner.go:211] docker network inspect default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:04:46.448915    7620 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220516230045-2444: (1.0894682s)
	I0516 23:04:46.448915    7620 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220516230045-2444]: docker network inspect default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220516230045-2444
	I0516 23:04:46.448915    7620 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220516230045-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220516230045-2444
	
	** /stderr **
	I0516 23:04:46.457074    7620 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 23:04:47.538190    7620 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.08096s)
	I0516 23:04:47.556562    7620 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014e258] amended:true}} dirty:map[192.168.49.0:0xc00014e258 192.168.58.0:0xc0007424e0 192.168.67.0:0xc00014e3c0 192.168.76.0:0xc0005c8420] misses:2}
	I0516 23:04:47.556562    7620 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:04:47.574113    7620 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014e258] amended:true}} dirty:map[192.168.49.0:0xc00014e258 192.168.58.0:0xc0007424e0 192.168.67.0:0xc00014e3c0 192.168.76.0:0xc0005c8420] misses:3}
	I0516 23:04:47.574113    7620 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:04:47.590115    7620 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014e258 192.168.58.0:0xc0007424e0 192.168.67.0:0xc00014e3c0 192.168.76.0:0xc0005c8420] amended:false}} dirty:map[] misses:0}
	I0516 23:04:47.590115    7620 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:04:47.606121    7620 network.go:279] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014e258 192.168.58.0:0xc0007424e0 192.168.67.0:0xc00014e3c0 192.168.76.0:0xc0005c8420] amended:false}} dirty:map[] misses:0}
	I0516 23:04:47.606121    7620 network.go:238] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:04:47.622118    7620 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014e258 192.168.58.0:0xc0007424e0 192.168.67.0:0xc00014e3c0 192.168.76.0:0xc0005c8420] amended:true}} dirty:map[192.168.49.0:0xc00014e258 192.168.58.0:0xc0007424e0 192.168.67.0:0xc00014e3c0 192.168.76.0:0xc0005c8420 192.168.85.0:0xc00014e4c0] misses:0}
	I0516 23:04:47.622118    7620 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:04:47.622118    7620 network_create.go:115] attempt to create docker network default-k8s-different-port-20220516230045-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0516 23:04:47.630114    7620 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444
	W0516 23:04:48.697355    7620 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:04:48.697355    7620 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444: (1.0671569s)
	E0516 23:04:48.697355    7620 network_create.go:104] error while trying to create docker network default-k8s-different-port-20220516230045-2444 192.168.85.0/24: create docker network default-k8s-different-port-20220516230045-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 249be9552995f305e5105bc2f69eae894158a56634d8dff50216addef2eb562b (br-249be9552995): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	W0516 23:04:48.697355    7620 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network default-k8s-different-port-20220516230045-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 249be9552995f305e5105bc2f69eae894158a56634d8dff50216addef2eb562b (br-249be9552995): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network default-k8s-different-port-20220516230045-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 249be9552995f305e5105bc2f69eae894158a56634d8dff50216addef2eb562b (br-249be9552995): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	I0516 23:04:48.718019    7620 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 23:04:49.801767    7620 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0837389s)
	I0516 23:04:49.812544    7620 cli_runner.go:164] Run: docker volume create default-k8s-different-port-20220516230045-2444 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220516230045-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 23:04:50.895844    7620 cli_runner.go:211] docker volume create default-k8s-different-port-20220516230045-2444 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220516230045-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 23:04:50.895844    7620 cli_runner.go:217] Completed: docker volume create default-k8s-different-port-20220516230045-2444 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220516230045-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0830762s)
	I0516 23:04:50.895844    7620 client.go:171] LocalClient.Create took 6.6658325s
	I0516 23:04:52.922521    7620 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:04:52.929250    7620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444
	W0516 23:04:54.002633    7620 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:04:54.002633    7620 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: (1.0733737s)
	I0516 23:04:54.002633    7620 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:04:54.285288    7620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444
	W0516 23:04:55.359979    7620 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:04:55.359979    7620 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: (1.0746815s)
	W0516 23:04:55.360347    7620 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	
	W0516 23:04:55.360415    7620 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:04:55.371214    7620 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:04:55.379055    7620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444
	W0516 23:04:56.494459    7620 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:04:56.494459    7620 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: (1.1153193s)
	I0516 23:04:56.494459    7620 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:04:56.707009    7620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444
	W0516 23:04:57.825934    7620 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:04:57.826199    7620 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: (1.1189157s)
	W0516 23:04:57.826199    7620 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	
	W0516 23:04:57.826199    7620 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:04:57.826199    7620 start.go:134] duration metric: createHost completed in 13.6011361s
	I0516 23:04:57.839628    7620 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:04:57.847913    7620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444
	W0516 23:04:58.961564    7620 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:04:58.961564    7620 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: (1.1135059s)
	I0516 23:04:58.961564    7620 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:04:59.288095    7620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444
	W0516 23:05:00.402234    7620 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:05:00.402234    7620 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: (1.1141291s)
	W0516 23:05:00.402234    7620 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	
	W0516 23:05:00.402234    7620 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:05:00.412243    7620 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:05:00.420240    7620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444
	W0516 23:05:01.494294    7620 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:05:01.494294    7620 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: (1.0739708s)
	I0516 23:05:01.494294    7620 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:05:01.854661    7620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444
	W0516 23:05:02.969557    7620 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444 returned with exit code 1
	I0516 23:05:02.969636    7620 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: (1.114848s)
	W0516 23:05:02.969945    7620 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	
	W0516 23:05:02.970026    7620 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220516230045-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220516230045-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	I0516 23:05:02.970026    7620 fix.go:57] fixHost completed within 47.8160756s
	I0516 23:05:02.970099    7620 start.go:81] releasing machines lock for "default-k8s-different-port-20220516230045-2444", held for 47.8160756s
	W0516 23:05:02.970709    7620 out.go:239] * Failed to start docker container. Running "minikube delete -p default-k8s-different-port-20220516230045-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220516230045-2444 container: docker volume create default-k8s-different-port-20220516230045-2444 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220516230045-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220516230045-2444: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220516230045-2444': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220516230045-2444: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p default-k8s-different-port-20220516230045-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220516230045-2444 container: docker volume create default-k8s-different-port-20220516230045-2444 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220516230045-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220516230045-2444: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220516230045-2444': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220516230045-2444: read-only file system
	
	I0516 23:05:02.980295    7620 out.go:177] 
	W0516 23:05:02.983290    7620 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220516230045-2444 container: docker volume create default-k8s-different-port-20220516230045-2444 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220516230045-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220516230045-2444: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220516230045-2444': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220516230045-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220516230045-2444 container: docker volume create default-k8s-different-port-20220516230045-2444 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220516230045-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220516230045-2444: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220516230045-2444': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220516230045-2444: read-only file system
	
	W0516 23:05:02.983290    7620 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0516 23:05:02.983290    7620 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0516 23:05:02.989029    7620 out.go:177] 

** /stderr **
start_stop_delete_test.go:261: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220516230045-2444 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.6": exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220516230045-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220516230045-2444: exit status 1 (1.1828596s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220516230045-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220516230045-2444 -n default-k8s-different-port-20220516230045-2444

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220516230045-2444 -n default-k8s-different-port-20220516230045-2444: exit status 7 (3.0646308s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 23:05:07.439531    2636 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220516230045-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/SecondStart (122.56s)
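
Editor's note on the first daemon error in the log above ("networks have overlapping IPv4"): Docker rejects `docker network create` when the requested subnet overlaps any existing bridge network. The overlap check itself is an ordinary CIDR comparison; the sketch below reproduces it with Python's stdlib `ipaddress` module (this is an illustration of the check, not minikube's or Docker's actual code, and the stale-bridge subnet shown is a hypothetical example):

```python
import ipaddress

# Subnet minikube tried to reserve (from the log: 192.168.85.0/24)
# versus a hypothetical stale bridge network left behind by an earlier run.
requested = ipaddress.ip_network("192.168.85.0/24")
stale = ipaddress.ip_network("192.168.85.0/24")

# Docker refuses the create when any existing bridge overlaps the request.
print(requested.overlaps(stale))                                  # overlapping
print(requested.overlaps(ipaddress.ip_network("192.168.86.0/24")))  # disjoint
```

This is why minikube's log shows it skipping the reserved subnets 192.168.49.0/24 through 192.168.76.0/24 before picking 192.168.85.0/24: its reservation table only tracks networks it knows about, so a bridge created outside that table (e.g. `br-ea4bbeff936d` above) can still collide.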

TestNetworkPlugins/group/calico/Start (81.9s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-20220516225309-2444 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p calico-20220516225309-2444 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker: exit status 60 (1m21.8081936s)

-- stdout --
	* [calico-20220516225309-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node calico-20220516225309-2444 in cluster calico-20220516225309-2444
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "calico-20220516225309-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0516 23:03:06.070560    7216 out.go:296] Setting OutFile to fd 1556 ...
	I0516 23:03:06.132127    7216 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:03:06.132127    7216 out.go:309] Setting ErrFile to fd 1552...
	I0516 23:03:06.132211    7216 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:03:06.145114    7216 out.go:303] Setting JSON to false
	I0516 23:03:06.146938    7216 start.go:115] hostinfo: {"hostname":"minikube2","uptime":5298,"bootTime":1652736888,"procs":155,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0516 23:03:06.146938    7216 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0516 23:03:06.150479    7216 out.go:177] * [calico-20220516225309-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0516 23:03:06.153714    7216 notify.go:193] Checking for updates...
	I0516 23:03:06.156874    7216 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0516 23:03:06.159192    7216 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0516 23:03:06.161613    7216 out.go:177]   - MINIKUBE_LOCATION=12739
	I0516 23:03:06.163729    7216 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0516 23:03:06.166410    7216 config.go:178] Loaded profile config "cilium-20220516225309-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:03:06.167409    7216 config.go:178] Loaded profile config "default-k8s-different-port-20220516230045-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:03:06.167409    7216 config.go:178] Loaded profile config "multinode-20220516223121-2444-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:03:06.167409    7216 config.go:178] Loaded profile config "newest-cni-20220516230100-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:03:06.167409    7216 driver.go:358] Setting default libvirt URI to qemu:///system
	I0516 23:03:09.044867    7216 docker.go:137] docker version: linux-20.10.14
	I0516 23:03:09.055456    7216 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 23:03:11.272662    7216 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.217187s)
	I0516 23:03:11.272662    7216 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 23:03:10.138934 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0516 23:03:11.276729    7216 out.go:177] * Using the docker driver based on user configuration
	I0516 23:03:11.278716    7216 start.go:284] selected driver: docker
	I0516 23:03:11.278716    7216 start.go:806] validating driver "docker" against <nil>
	I0516 23:03:11.278716    7216 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0516 23:03:11.343733    7216 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 23:03:13.469300    7216 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1255494s)
	I0516 23:03:13.469300    7216 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 23:03:12.4040546 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0516 23:03:13.469300    7216 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0516 23:03:13.470314    7216 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0516 23:03:13.473326    7216 out.go:177] * Using Docker Desktop driver with the root privilege
	I0516 23:03:13.475290    7216 cni.go:95] Creating CNI manager for "calico"
	I0516 23:03:13.475290    7216 start_flags.go:301] Found "Calico" CNI - setting NetworkPlugin=cni
	I0516 23:03:13.475290    7216 start_flags.go:306] config:
	{Name:calico-20220516225309-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:calico-20220516225309-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 23:03:13.478297    7216 out.go:177] * Starting control plane node calico-20220516225309-2444 in cluster calico-20220516225309-2444
	I0516 23:03:13.481297    7216 cache.go:120] Beginning downloading kic base image for docker with docker
	I0516 23:03:13.486296    7216 out.go:177] * Pulling base image ...
	I0516 23:03:13.488303    7216 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0516 23:03:13.488303    7216 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0516 23:03:13.488303    7216 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0516 23:03:13.488303    7216 cache.go:57] Caching tarball of preloaded images
	I0516 23:03:13.488303    7216 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0516 23:03:13.488303    7216 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0516 23:03:13.489298    7216 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-20220516225309-2444\config.json ...
	I0516 23:03:13.489298    7216 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-20220516225309-2444\config.json: {Name:mkb8c10e40b64b9bdc9f950eb250b2a4f4b721ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0516 23:03:14.600227    7216 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0516 23:03:14.600227    7216 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:03:14.600227    7216 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:03:14.600227    7216 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
	I0516 23:03:14.600227    7216 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory, skipping pull
	I0516 23:03:14.600227    7216 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in cache, skipping pull
	I0516 23:03:14.600227    7216 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
	I0516 23:03:14.600227    7216 cache.go:160] Loading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from local cache
	I0516 23:03:14.600227    7216 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:03:16.976608    7216 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from cached tarball
	I0516 23:03:16.976689    7216 cache.go:206] Successfully downloaded all kic artifacts
	I0516 23:03:16.976779    7216 start.go:352] acquiring machines lock for calico-20220516225309-2444: {Name:mk03b7eb6997909f8bddf03c46482793976e2f58 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 23:03:16.976839    7216 start.go:356] acquired machines lock for "calico-20220516225309-2444" in 0s
	I0516 23:03:16.976839    7216 start.go:91] Provisioning new machine with config: &{Name:calico-20220516225309-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:calico-20220516225309-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0516 23:03:16.976839    7216 start.go:131] createHost starting for "" (driver="docker")
	I0516 23:03:16.980392    7216 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0516 23:03:16.980392    7216 start.go:165] libmachine.API.Create for "calico-20220516225309-2444" (driver="docker")
	I0516 23:03:16.980392    7216 client.go:168] LocalClient.Create starting
	I0516 23:03:16.981350    7216 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 23:03:16.981532    7216 main.go:134] libmachine: Decoding PEM data...
	I0516 23:03:16.981532    7216 main.go:134] libmachine: Parsing certificate...
	I0516 23:03:16.981532    7216 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 23:03:16.981532    7216 main.go:134] libmachine: Decoding PEM data...
	I0516 23:03:16.981532    7216 main.go:134] libmachine: Parsing certificate...
	I0516 23:03:16.993168    7216 cli_runner.go:164] Run: docker network inspect calico-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:03:18.056729    7216 cli_runner.go:211] docker network inspect calico-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:03:18.056729    7216 cli_runner.go:217] Completed: docker network inspect calico-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0633469s)
	I0516 23:03:18.065078    7216 network_create.go:272] running [docker network inspect calico-20220516225309-2444] to gather additional debugging logs...
	I0516 23:03:18.065078    7216 cli_runner.go:164] Run: docker network inspect calico-20220516225309-2444
	W0516 23:03:19.178900    7216 cli_runner.go:211] docker network inspect calico-20220516225309-2444 returned with exit code 1
	I0516 23:03:19.178900    7216 cli_runner.go:217] Completed: docker network inspect calico-20220516225309-2444: (1.1138128s)
	I0516 23:03:19.178900    7216 network_create.go:275] error running [docker network inspect calico-20220516225309-2444]: docker network inspect calico-20220516225309-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220516225309-2444
	I0516 23:03:19.178900    7216 network_create.go:277] output of [docker network inspect calico-20220516225309-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220516225309-2444
	
	** /stderr **
	I0516 23:03:19.187864    7216 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 23:03:20.253688    7216 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.065815s)
	I0516 23:03:20.273706    7216 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00058e2a0] misses:0}
	I0516 23:03:20.274691    7216 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:03:20.274691    7216 network_create.go:115] attempt to create docker network calico-20220516225309-2444 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0516 23:03:20.281700    7216 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220516225309-2444
	W0516 23:03:21.406020    7216 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220516225309-2444 returned with exit code 1
	I0516 23:03:21.406020    7216 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220516225309-2444: (1.1243101s)
	W0516 23:03:21.406020    7216 network_create.go:107] failed to create docker network calico-20220516225309-2444 192.168.49.0/24, will retry: subnet is taken
	I0516 23:03:21.424014    7216 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00058e2a0] amended:false}} dirty:map[] misses:0}
	I0516 23:03:21.424014    7216 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:03:21.442782    7216 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00058e2a0] amended:true}} dirty:map[192.168.49.0:0xc00058e2a0 192.168.58.0:0xc000006950] misses:0}
	I0516 23:03:21.442782    7216 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:03:21.442782    7216 network_create.go:115] attempt to create docker network calico-20220516225309-2444 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0516 23:03:21.451215    7216 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220516225309-2444
	W0516 23:03:22.559162    7216 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220516225309-2444 returned with exit code 1
	I0516 23:03:22.559227    7216 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220516225309-2444: (1.1078355s)
	W0516 23:03:22.559227    7216 network_create.go:107] failed to create docker network calico-20220516225309-2444 192.168.58.0/24, will retry: subnet is taken
	I0516 23:03:22.578991    7216 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00058e2a0] amended:true}} dirty:map[192.168.49.0:0xc00058e2a0 192.168.58.0:0xc000006950] misses:1}
	I0516 23:03:22.578991    7216 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:03:22.599331    7216 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00058e2a0] amended:true}} dirty:map[192.168.49.0:0xc00058e2a0 192.168.58.0:0xc000006950 192.168.67.0:0xc000006a00] misses:1}
	I0516 23:03:22.599331    7216 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:03:22.599331    7216 network_create.go:115] attempt to create docker network calico-20220516225309-2444 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0516 23:03:22.607728    7216 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220516225309-2444
	W0516 23:03:23.670781    7216 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220516225309-2444 returned with exit code 1
	I0516 23:03:23.670781    7216 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220516225309-2444: (1.0630434s)
	W0516 23:03:23.670781    7216 network_create.go:107] failed to create docker network calico-20220516225309-2444 192.168.67.0/24, will retry: subnet is taken
	I0516 23:03:23.689776    7216 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00058e2a0] amended:true}} dirty:map[192.168.49.0:0xc00058e2a0 192.168.58.0:0xc000006950 192.168.67.0:0xc000006a00] misses:2}
	I0516 23:03:23.689776    7216 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:03:23.725851    7216 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00058e2a0] amended:true}} dirty:map[192.168.49.0:0xc00058e2a0 192.168.58.0:0xc000006950 192.168.67.0:0xc000006a00 192.168.76.0:0xc000006a98] misses:2}
	I0516 23:03:23.726471    7216 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:03:23.726648    7216 network_create.go:115] attempt to create docker network calico-20220516225309-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0516 23:03:23.733810    7216 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220516225309-2444
	W0516 23:03:24.919966    7216 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220516225309-2444 returned with exit code 1
	I0516 23:03:24.919966    7216 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220516225309-2444: (1.186146s)
	E0516 23:03:24.919966    7216 network_create.go:104] error while trying to create docker network calico-20220516225309-2444 192.168.76.0/24: create docker network calico-20220516225309-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220516225309-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 5c96980a2818422909e7715e4723e2c9cf1330f8e13113cba6647666daaf2970 (br-5c96980a2818): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	W0516 23:03:24.919966    7216 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network calico-20220516225309-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220516225309-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 5c96980a2818422909e7715e4723e2c9cf1330f8e13113cba6647666daaf2970 (br-5c96980a2818): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network calico-20220516225309-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220516225309-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 5c96980a2818422909e7715e4723e2c9cf1330f8e13113cba6647666daaf2970 (br-5c96980a2818): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	I0516 23:03:24.938871    7216 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 23:03:26.081052    7216 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1421718s)
	I0516 23:03:26.088031    7216 cli_runner.go:164] Run: docker volume create calico-20220516225309-2444 --label name.minikube.sigs.k8s.io=calico-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 23:03:27.186168    7216 cli_runner.go:211] docker volume create calico-20220516225309-2444 --label name.minikube.sigs.k8s.io=calico-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 23:03:27.186428    7216 cli_runner.go:217] Completed: docker volume create calico-20220516225309-2444 --label name.minikube.sigs.k8s.io=calico-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0981276s)
	I0516 23:03:27.186505    7216 client.go:171] LocalClient.Create took 10.20599s
	I0516 23:03:29.211473    7216 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:03:29.217830    7216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444
	W0516 23:03:30.328237    7216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444 returned with exit code 1
	I0516 23:03:30.328331    7216 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444: (1.110397s)
	I0516 23:03:30.328519    7216 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	I0516 23:03:30.624033    7216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444
	W0516 23:03:31.726073    7216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444 returned with exit code 1
	I0516 23:03:31.726107    7216 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444: (1.1018155s)
	W0516 23:03:31.726333    7216 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	
	W0516 23:03:31.726385    7216 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	I0516 23:03:31.737270    7216 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:03:31.744809    7216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444
	W0516 23:03:32.851407    7216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444 returned with exit code 1
	I0516 23:03:32.851445    7216 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444: (1.1065546s)
	I0516 23:03:32.851503    7216 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	I0516 23:03:33.155497    7216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444
	W0516 23:03:34.250832    7216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444 returned with exit code 1
	I0516 23:03:34.251076    7216 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444: (1.0953258s)
	W0516 23:03:34.251460    7216 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	
	W0516 23:03:34.251555    7216 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	I0516 23:03:34.251604    7216 start.go:134] duration metric: createHost completed in 17.2746162s
	I0516 23:03:34.251653    7216 start.go:81] releasing machines lock for "calico-20220516225309-2444", held for 17.2746162s
	W0516 23:03:34.251702    7216 start.go:608] error starting host: creating host: create: creating: setting up container node: creating volume for calico-20220516225309-2444 container: docker volume create calico-20220516225309-2444 --label name.minikube.sigs.k8s.io=calico-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create calico-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/calico-20220516225309-2444': mkdir /var/lib/docker/volumes/calico-20220516225309-2444: read-only file system
	I0516 23:03:34.269987    7216 cli_runner.go:164] Run: docker container inspect calico-20220516225309-2444 --format={{.State.Status}}
	W0516 23:03:35.309654    7216 cli_runner.go:211] docker container inspect calico-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:35.309654    7216 cli_runner.go:217] Completed: docker container inspect calico-20220516225309-2444 --format={{.State.Status}}: (1.0396582s)
	I0516 23:03:35.309654    7216 delete.go:82] Unable to get host status for calico-20220516225309-2444, assuming it has already been deleted: state: unknown state "calico-20220516225309-2444": docker container inspect calico-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	W0516 23:03:35.309654    7216 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for calico-20220516225309-2444 container: docker volume create calico-20220516225309-2444 --label name.minikube.sigs.k8s.io=calico-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create calico-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/calico-20220516225309-2444': mkdir /var/lib/docker/volumes/calico-20220516225309-2444: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for calico-20220516225309-2444 container: docker volume create calico-20220516225309-2444 --label name.minikube.sigs.k8s.io=calico-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create calico-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/calico-20220516225309-2444': mkdir /var/lib/docker/volumes/calico-20220516225309-2444: read-only file system
	
	I0516 23:03:35.309654    7216 start.go:623] Will try again in 5 seconds ...
	I0516 23:03:40.320989    7216 start.go:352] acquiring machines lock for calico-20220516225309-2444: {Name:mk03b7eb6997909f8bddf03c46482793976e2f58 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 23:03:40.321479    7216 start.go:356] acquired machines lock for "calico-20220516225309-2444" in 268.6µs
	I0516 23:03:40.321479    7216 start.go:94] Skipping create...Using existing machine configuration
	I0516 23:03:40.321479    7216 fix.go:55] fixHost starting: 
	I0516 23:03:40.337894    7216 cli_runner.go:164] Run: docker container inspect calico-20220516225309-2444 --format={{.State.Status}}
	W0516 23:03:41.374054    7216 cli_runner.go:211] docker container inspect calico-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:41.374054    7216 cli_runner.go:217] Completed: docker container inspect calico-20220516225309-2444 --format={{.State.Status}}: (1.0359368s)
	I0516 23:03:41.374054    7216 fix.go:103] recreateIfNeeded on calico-20220516225309-2444: state= err=unknown state "calico-20220516225309-2444": docker container inspect calico-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	I0516 23:03:41.374054    7216 fix.go:108] machineExists: false. err=machine does not exist
	I0516 23:03:41.377494    7216 out.go:177] * docker "calico-20220516225309-2444" container is missing, will recreate.
	I0516 23:03:41.380841    7216 delete.go:124] DEMOLISHING calico-20220516225309-2444 ...
	I0516 23:03:41.396140    7216 cli_runner.go:164] Run: docker container inspect calico-20220516225309-2444 --format={{.State.Status}}
	W0516 23:03:42.463927    7216 cli_runner.go:211] docker container inspect calico-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:42.464010    7216 cli_runner.go:217] Completed: docker container inspect calico-20220516225309-2444 --format={{.State.Status}}: (1.0675751s)
	W0516 23:03:42.464061    7216 stop.go:75] unable to get state: unknown state "calico-20220516225309-2444": docker container inspect calico-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	I0516 23:03:42.464105    7216 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "calico-20220516225309-2444": docker container inspect calico-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	I0516 23:03:42.482095    7216 cli_runner.go:164] Run: docker container inspect calico-20220516225309-2444 --format={{.State.Status}}
	W0516 23:03:43.568554    7216 cli_runner.go:211] docker container inspect calico-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:43.568554    7216 cli_runner.go:217] Completed: docker container inspect calico-20220516225309-2444 --format={{.State.Status}}: (1.086389s)
	I0516 23:03:43.568554    7216 delete.go:82] Unable to get host status for calico-20220516225309-2444, assuming it has already been deleted: state: unknown state "calico-20220516225309-2444": docker container inspect calico-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	I0516 23:03:43.578166    7216 cli_runner.go:164] Run: docker container inspect -f {{.Id}} calico-20220516225309-2444
	W0516 23:03:44.674050    7216 cli_runner.go:211] docker container inspect -f {{.Id}} calico-20220516225309-2444 returned with exit code 1
	I0516 23:03:44.674131    7216 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} calico-20220516225309-2444: (1.0958546s)
	I0516 23:03:44.674131    7216 kic.go:356] could not find the container calico-20220516225309-2444 to remove it. will try anyways
	I0516 23:03:44.683713    7216 cli_runner.go:164] Run: docker container inspect calico-20220516225309-2444 --format={{.State.Status}}
	W0516 23:03:45.806310    7216 cli_runner.go:211] docker container inspect calico-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:45.806581    7216 cli_runner.go:217] Completed: docker container inspect calico-20220516225309-2444 --format={{.State.Status}}: (1.1225878s)
	W0516 23:03:45.806638    7216 oci.go:84] error getting container status, will try to delete anyways: unknown state "calico-20220516225309-2444": docker container inspect calico-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	I0516 23:03:45.816340    7216 cli_runner.go:164] Run: docker exec --privileged -t calico-20220516225309-2444 /bin/bash -c "sudo init 0"
	W0516 23:03:46.909862    7216 cli_runner.go:211] docker exec --privileged -t calico-20220516225309-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 23:03:46.909986    7216 cli_runner.go:217] Completed: docker exec --privileged -t calico-20220516225309-2444 /bin/bash -c "sudo init 0": (1.0935129s)
	I0516 23:03:46.909986    7216 oci.go:641] error shutdown calico-20220516225309-2444: docker exec --privileged -t calico-20220516225309-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	I0516 23:03:47.935465    7216 cli_runner.go:164] Run: docker container inspect calico-20220516225309-2444 --format={{.State.Status}}
	W0516 23:03:49.027168    7216 cli_runner.go:211] docker container inspect calico-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:49.027168    7216 cli_runner.go:217] Completed: docker container inspect calico-20220516225309-2444 --format={{.State.Status}}: (1.0916935s)
	I0516 23:03:49.027168    7216 oci.go:653] temporary error verifying shutdown: unknown state "calico-20220516225309-2444": docker container inspect calico-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	I0516 23:03:49.027168    7216 oci.go:655] temporary error: container calico-20220516225309-2444 status is  but expect it to be exited
	I0516 23:03:49.027168    7216 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "calico-20220516225309-2444": docker container inspect calico-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	I0516 23:03:49.505006    7216 cli_runner.go:164] Run: docker container inspect calico-20220516225309-2444 --format={{.State.Status}}
	W0516 23:03:50.626242    7216 cli_runner.go:211] docker container inspect calico-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:50.626292    7216 cli_runner.go:217] Completed: docker container inspect calico-20220516225309-2444 --format={{.State.Status}}: (1.1211371s)
	I0516 23:03:50.626449    7216 oci.go:653] temporary error verifying shutdown: unknown state "calico-20220516225309-2444": docker container inspect calico-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	I0516 23:03:50.626498    7216 oci.go:655] temporary error: container calico-20220516225309-2444 status is  but expect it to be exited
	I0516 23:03:50.626536    7216 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "calico-20220516225309-2444": docker container inspect calico-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	I0516 23:03:51.528566    7216 cli_runner.go:164] Run: docker container inspect calico-20220516225309-2444 --format={{.State.Status}}
	W0516 23:03:52.639963    7216 cli_runner.go:211] docker container inspect calico-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:52.639963    7216 cli_runner.go:217] Completed: docker container inspect calico-20220516225309-2444 --format={{.State.Status}}: (1.1113871s)
	I0516 23:03:52.639963    7216 oci.go:653] temporary error verifying shutdown: unknown state "calico-20220516225309-2444": docker container inspect calico-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	I0516 23:03:52.639963    7216 oci.go:655] temporary error: container calico-20220516225309-2444 status is  but expect it to be exited
	I0516 23:03:52.639963    7216 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "calico-20220516225309-2444": docker container inspect calico-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	I0516 23:03:53.294171    7216 cli_runner.go:164] Run: docker container inspect calico-20220516225309-2444 --format={{.State.Status}}
	W0516 23:03:54.396742    7216 cli_runner.go:211] docker container inspect calico-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:54.396969    7216 cli_runner.go:217] Completed: docker container inspect calico-20220516225309-2444 --format={{.State.Status}}: (1.1025614s)
	I0516 23:03:54.397076    7216 oci.go:653] temporary error verifying shutdown: unknown state "calico-20220516225309-2444": docker container inspect calico-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	I0516 23:03:54.397133    7216 oci.go:655] temporary error: container calico-20220516225309-2444 status is  but expect it to be exited
	I0516 23:03:54.397133    7216 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "calico-20220516225309-2444": docker container inspect calico-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	I0516 23:03:55.527779    7216 cli_runner.go:164] Run: docker container inspect calico-20220516225309-2444 --format={{.State.Status}}
	W0516 23:03:56.629049    7216 cli_runner.go:211] docker container inspect calico-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:56.629049    7216 cli_runner.go:217] Completed: docker container inspect calico-20220516225309-2444 --format={{.State.Status}}: (1.1012603s)
	I0516 23:03:56.629049    7216 oci.go:653] temporary error verifying shutdown: unknown state "calico-20220516225309-2444": docker container inspect calico-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	I0516 23:03:56.629049    7216 oci.go:655] temporary error: container calico-20220516225309-2444 status is  but expect it to be exited
	I0516 23:03:56.629049    7216 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "calico-20220516225309-2444": docker container inspect calico-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	I0516 23:03:58.160568    7216 cli_runner.go:164] Run: docker container inspect calico-20220516225309-2444 --format={{.State.Status}}
	W0516 23:03:59.264192    7216 cli_runner.go:211] docker container inspect calico-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:59.264192    7216 cli_runner.go:217] Completed: docker container inspect calico-20220516225309-2444 --format={{.State.Status}}: (1.1035668s)
	I0516 23:03:59.264311    7216 oci.go:653] temporary error verifying shutdown: unknown state "calico-20220516225309-2444": docker container inspect calico-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	I0516 23:03:59.264311    7216 oci.go:655] temporary error: container calico-20220516225309-2444 status is  but expect it to be exited
	I0516 23:03:59.264366    7216 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "calico-20220516225309-2444": docker container inspect calico-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	I0516 23:04:02.325578    7216 cli_runner.go:164] Run: docker container inspect calico-20220516225309-2444 --format={{.State.Status}}
	W0516 23:04:03.420277    7216 cli_runner.go:211] docker container inspect calico-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:04:03.420330    7216 cli_runner.go:217] Completed: docker container inspect calico-20220516225309-2444 --format={{.State.Status}}: (1.0946903s)
	I0516 23:04:03.420433    7216 oci.go:653] temporary error verifying shutdown: unknown state "calico-20220516225309-2444": docker container inspect calico-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	I0516 23:04:03.420433    7216 oci.go:655] temporary error: container calico-20220516225309-2444 status is  but expect it to be exited
	I0516 23:04:03.420529    7216 oci.go:88] couldn't shut down calico-20220516225309-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "calico-20220516225309-2444": docker container inspect calico-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	 
	I0516 23:04:03.429518    7216 cli_runner.go:164] Run: docker rm -f -v calico-20220516225309-2444
	I0516 23:04:04.566056    7216 cli_runner.go:217] Completed: docker rm -f -v calico-20220516225309-2444: (1.1365275s)
	I0516 23:04:04.574050    7216 cli_runner.go:164] Run: docker container inspect -f {{.Id}} calico-20220516225309-2444
	W0516 23:04:05.722288    7216 cli_runner.go:211] docker container inspect -f {{.Id}} calico-20220516225309-2444 returned with exit code 1
	I0516 23:04:05.722358    7216 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} calico-20220516225309-2444: (1.1480554s)
	I0516 23:04:05.729842    7216 cli_runner.go:164] Run: docker network inspect calico-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:04:06.860451    7216 cli_runner.go:211] docker network inspect calico-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:04:06.860451    7216 cli_runner.go:217] Completed: docker network inspect calico-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1305996s)
	I0516 23:04:06.867549    7216 network_create.go:272] running [docker network inspect calico-20220516225309-2444] to gather additional debugging logs...
	I0516 23:04:06.867549    7216 cli_runner.go:164] Run: docker network inspect calico-20220516225309-2444
	W0516 23:04:07.980866    7216 cli_runner.go:211] docker network inspect calico-20220516225309-2444 returned with exit code 1
	I0516 23:04:07.980866    7216 cli_runner.go:217] Completed: docker network inspect calico-20220516225309-2444: (1.113307s)
	I0516 23:04:07.980866    7216 network_create.go:275] error running [docker network inspect calico-20220516225309-2444]: docker network inspect calico-20220516225309-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220516225309-2444
	I0516 23:04:07.980866    7216 network_create.go:277] output of [docker network inspect calico-20220516225309-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220516225309-2444
	
	** /stderr **
	W0516 23:04:07.981858    7216 delete.go:139] delete failed (probably ok) <nil>
	I0516 23:04:07.981858    7216 fix.go:115] Sleeping 1 second for extra luck!
	I0516 23:04:08.989493    7216 start.go:131] createHost starting for "" (driver="docker")
	I0516 23:04:08.992681    7216 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0516 23:04:08.992681    7216 start.go:165] libmachine.API.Create for "calico-20220516225309-2444" (driver="docker")
	I0516 23:04:08.992681    7216 client.go:168] LocalClient.Create starting
	I0516 23:04:08.993539    7216 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 23:04:08.993735    7216 main.go:134] libmachine: Decoding PEM data...
	I0516 23:04:08.993838    7216 main.go:134] libmachine: Parsing certificate...
	I0516 23:04:08.994031    7216 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 23:04:08.994254    7216 main.go:134] libmachine: Decoding PEM data...
	I0516 23:04:08.994311    7216 main.go:134] libmachine: Parsing certificate...
	I0516 23:04:09.004764    7216 cli_runner.go:164] Run: docker network inspect calico-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:04:10.118462    7216 cli_runner.go:211] docker network inspect calico-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:04:10.118508    7216 cli_runner.go:217] Completed: docker network inspect calico-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1136348s)
	I0516 23:04:10.127228    7216 network_create.go:272] running [docker network inspect calico-20220516225309-2444] to gather additional debugging logs...
	I0516 23:04:10.127228    7216 cli_runner.go:164] Run: docker network inspect calico-20220516225309-2444
	W0516 23:04:11.212915    7216 cli_runner.go:211] docker network inspect calico-20220516225309-2444 returned with exit code 1
	I0516 23:04:11.213245    7216 cli_runner.go:217] Completed: docker network inspect calico-20220516225309-2444: (1.0856305s)
	I0516 23:04:11.213245    7216 network_create.go:275] error running [docker network inspect calico-20220516225309-2444]: docker network inspect calico-20220516225309-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220516225309-2444
	I0516 23:04:11.213245    7216 network_create.go:277] output of [docker network inspect calico-20220516225309-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220516225309-2444
	
	** /stderr **
	I0516 23:04:11.228142    7216 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 23:04:12.305578    7216 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0768181s)
	I0516 23:04:12.324805    7216 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00058e2a0] amended:true}} dirty:map[192.168.49.0:0xc00058e2a0 192.168.58.0:0xc000006950 192.168.67.0:0xc000006a00 192.168.76.0:0xc000006a98] misses:2}
	I0516 23:04:12.324805    7216 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:04:12.342899    7216 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00058e2a0] amended:true}} dirty:map[192.168.49.0:0xc00058e2a0 192.168.58.0:0xc000006950 192.168.67.0:0xc000006a00 192.168.76.0:0xc000006a98] misses:3}
	I0516 23:04:12.343433    7216 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:04:12.358492    7216 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00058e2a0 192.168.58.0:0xc000006950 192.168.67.0:0xc000006a00 192.168.76.0:0xc000006a98] amended:false}} dirty:map[] misses:0}
	I0516 23:04:12.358492    7216 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:04:12.375416    7216 network.go:279] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00058e2a0 192.168.58.0:0xc000006950 192.168.67.0:0xc000006a00 192.168.76.0:0xc000006a98] amended:false}} dirty:map[] misses:0}
	I0516 23:04:12.375416    7216 network.go:238] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:04:12.394983    7216 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00058e2a0 192.168.58.0:0xc000006950 192.168.67.0:0xc000006a00 192.168.76.0:0xc000006a98] amended:true}} dirty:map[192.168.49.0:0xc00058e2a0 192.168.58.0:0xc000006950 192.168.67.0:0xc000006a00 192.168.76.0:0xc000006a98 192.168.85.0:0xc000142338] misses:0}
	I0516 23:04:12.394983    7216 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:04:12.394983    7216 network_create.go:115] attempt to create docker network calico-20220516225309-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0516 23:04:12.402579    7216 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220516225309-2444
	W0516 23:04:13.518769    7216 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220516225309-2444 returned with exit code 1
	I0516 23:04:13.518922    7216 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220516225309-2444: (1.1161799s)
	E0516 23:04:13.519012    7216 network_create.go:104] error while trying to create docker network calico-20220516225309-2444 192.168.85.0/24: create docker network calico-20220516225309-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220516225309-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ffd7d581657f3b2f22c1bd79e7031ed2cff246bebde6102db5e979c21a8f8aba (br-ffd7d581657f): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	W0516 23:04:13.519242    7216 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network calico-20220516225309-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220516225309-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ffd7d581657f3b2f22c1bd79e7031ed2cff246bebde6102db5e979c21a8f8aba (br-ffd7d581657f): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network calico-20220516225309-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220516225309-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ffd7d581657f3b2f22c1bd79e7031ed2cff246bebde6102db5e979c21a8f8aba (br-ffd7d581657f): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	I0516 23:04:13.534152    7216 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 23:04:14.595845    7216 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0607644s)
	I0516 23:04:14.604477    7216 cli_runner.go:164] Run: docker volume create calico-20220516225309-2444 --label name.minikube.sigs.k8s.io=calico-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 23:04:15.687497    7216 cli_runner.go:211] docker volume create calico-20220516225309-2444 --label name.minikube.sigs.k8s.io=calico-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 23:04:15.687637    7216 cli_runner.go:217] Completed: docker volume create calico-20220516225309-2444 --label name.minikube.sigs.k8s.io=calico-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0830101s)
	I0516 23:04:15.687713    7216 client.go:171] LocalClient.Create took 6.6949739s
	I0516 23:04:17.701105    7216 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:04:17.709298    7216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444
	W0516 23:04:18.829234    7216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444 returned with exit code 1
	I0516 23:04:18.829234    7216 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444: (1.1197978s)
	I0516 23:04:18.829234    7216 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	I0516 23:04:19.172457    7216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444
	W0516 23:04:20.259318    7216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444 returned with exit code 1
	I0516 23:04:20.259318    7216 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444: (1.0868513s)
	W0516 23:04:20.259318    7216 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	
	W0516 23:04:20.259318    7216 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	I0516 23:04:20.269318    7216 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:04:20.276320    7216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444
	W0516 23:04:21.365645    7216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444 returned with exit code 1
	I0516 23:04:21.365645    7216 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444: (1.0893156s)
	I0516 23:04:21.365645    7216 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	I0516 23:04:21.596857    7216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444
	W0516 23:04:22.686796    7216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444 returned with exit code 1
	I0516 23:04:22.686796    7216 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444: (1.0899294s)
	W0516 23:04:22.686796    7216 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	
	W0516 23:04:22.686796    7216 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	I0516 23:04:22.686796    7216 start.go:134] duration metric: createHost completed in 13.6970833s
	I0516 23:04:22.696796    7216 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:04:22.703799    7216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444
	W0516 23:04:23.799813    7216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444 returned with exit code 1
	I0516 23:04:23.799813    7216 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444: (1.096005s)
	I0516 23:04:23.799813    7216 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	I0516 23:04:24.059700    7216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444
	W0516 23:04:25.124518    7216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444 returned with exit code 1
	I0516 23:04:25.124518    7216 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444: (1.0648095s)
	W0516 23:04:25.124518    7216 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	
	W0516 23:04:25.124518    7216 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	I0516 23:04:25.134649    7216 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:04:25.143105    7216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444
	W0516 23:04:26.293927    7216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444 returned with exit code 1
	I0516 23:04:26.293975    7216 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444: (1.1506391s)
	I0516 23:04:26.294266    7216 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	I0516 23:04:26.506339    7216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444
	W0516 23:04:27.597646    7216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444 returned with exit code 1
	I0516 23:04:27.597646    7216 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444: (1.0912969s)
	W0516 23:04:27.597646    7216 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	
	W0516 23:04:27.597646    7216 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220516225309-2444
	I0516 23:04:27.597646    7216 fix.go:57] fixHost completed within 47.2757578s
	I0516 23:04:27.597646    7216 start.go:81] releasing machines lock for "calico-20220516225309-2444", held for 47.2757578s
	W0516 23:04:27.598552    7216 out.go:239] * Failed to start docker container. Running "minikube delete -p calico-20220516225309-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for calico-20220516225309-2444 container: docker volume create calico-20220516225309-2444 --label name.minikube.sigs.k8s.io=calico-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create calico-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/calico-20220516225309-2444': mkdir /var/lib/docker/volumes/calico-20220516225309-2444: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p calico-20220516225309-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for calico-20220516225309-2444 container: docker volume create calico-20220516225309-2444 --label name.minikube.sigs.k8s.io=calico-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create calico-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/calico-20220516225309-2444': mkdir /var/lib/docker/volumes/calico-20220516225309-2444: read-only file system
	
	I0516 23:04:27.603039    7216 out.go:177] 
	W0516 23:04:27.605036    7216 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for calico-20220516225309-2444 container: docker volume create calico-20220516225309-2444 --label name.minikube.sigs.k8s.io=calico-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create calico-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/calico-20220516225309-2444': mkdir /var/lib/docker/volumes/calico-20220516225309-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for calico-20220516225309-2444 container: docker volume create calico-20220516225309-2444 --label name.minikube.sigs.k8s.io=calico-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create calico-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/calico-20220516225309-2444': mkdir /var/lib/docker/volumes/calico-20220516225309-2444: read-only file system
	
	W0516 23:04:27.605036    7216 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0516 23:04:27.605726    7216 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0516 23:04:27.608918    7216 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 60
--- FAIL: TestNetworkPlugins/group/calico/Start (81.90s)
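
The calico start above fails in two steps: `docker network create --subnet=192.168.85.0/24` is rejected with "networks have overlapping IPv4" because an existing bridge network already claims an overlapping range (minikube logs this as a non-fatal warning), and the run then exits fatally on `PR_DOCKER_READONLY_VOL` when `docker volume create` hits a read-only `/var/lib/docker`. The overlap condition itself can be reproduced offline; a minimal sketch using Python's standard `ipaddress` module (the `first_overlap` helper and the subnet values are illustrative, not taken from this run's `docker network inspect` output):

```python
import ipaddress

def first_overlap(requested, existing):
    """Return the first CIDR in `existing` that overlaps `requested`, else None."""
    want = ipaddress.ip_network(requested)
    for cidr in existing:
        if want.overlaps(ipaddress.ip_network(cidr)):
            return cidr
    return None

# CIDRs of the host's existing bridge networks, e.g. collected with
#   docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}' <name>
# The values below are made up for illustration.
existing = ["172.17.0.0/16", "192.168.85.0/24"]
print(first_overlap("192.168.85.0/24", existing))
```

Any non-None result corresponds to the daemon's "conflicts with network ... networks have overlapping IPv4" refusal; `overlaps()` also catches partial containment (a /25 inside an existing /24), which is why picking an adjacent subnet is the usual remedy.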

TestStartStop/group/newest-cni/serial/SecondStart (122.84s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-20220516230100-2444 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.6

=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p newest-cni-20220516230100-2444 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.6: exit status 60 (1m58.4513711s)

-- stdout --
	* [newest-cni-20220516230100-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node newest-cni-20220516230100-2444 in cluster newest-cni-20220516230100-2444
	* Pulling base image ...
	* docker "newest-cni-20220516230100-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "newest-cni-20220516230100-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0516 23:03:07.477437    6776 out.go:296] Setting OutFile to fd 1712 ...
	I0516 23:03:07.539387    6776 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:03:07.539387    6776 out.go:309] Setting ErrFile to fd 1800...
	I0516 23:03:07.539387    6776 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:03:07.552561    6776 out.go:303] Setting JSON to false
	I0516 23:03:07.555031    6776 start.go:115] hostinfo: {"hostname":"minikube2","uptime":5299,"bootTime":1652736888,"procs":154,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0516 23:03:07.555163    6776 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0516 23:03:07.556693    6776 out.go:177] * [newest-cni-20220516230100-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0516 23:03:07.556693    6776 notify.go:193] Checking for updates...
	I0516 23:03:07.564284    6776 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0516 23:03:07.566202    6776 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0516 23:03:07.568879    6776 out.go:177]   - MINIKUBE_LOCATION=12739
	I0516 23:03:07.571071    6776 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0516 23:03:07.573892    6776 config.go:178] Loaded profile config "newest-cni-20220516230100-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:03:07.575005    6776 driver.go:358] Setting default libvirt URI to qemu:///system
	I0516 23:03:10.304090    6776 docker.go:137] docker version: linux-20.10.14
	I0516 23:03:10.313427    6776 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 23:03:12.436377    6776 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1229321s)
	I0516 23:03:12.436377    6776 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 23:03:11.3703074 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0516 23:03:12.443394    6776 out.go:177] * Using the docker driver based on existing profile
	I0516 23:03:12.445429    6776 start.go:284] selected driver: docker
	I0516 23:03:12.445429    6776 start.go:806] validating driver "docker" against &{Name:newest-cni-20220516230100-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220516230100-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 23:03:12.445429    6776 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0516 23:03:12.508389    6776 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 23:03:14.664215    6776 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.155808s)
	I0516 23:03:14.664215    6776 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-05-16 23:03:13.5699035 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0516 23:03:14.664215    6776 start_flags.go:866] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0516 23:03:14.664215    6776 cni.go:95] Creating CNI manager for ""
	I0516 23:03:14.664215    6776 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0516 23:03:14.664215    6776 start_flags.go:306] config:
	{Name:newest-cni-20220516230100-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220516230100-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 23:03:14.671212    6776 out.go:177] * Starting control plane node newest-cni-20220516230100-2444 in cluster newest-cni-20220516230100-2444
	I0516 23:03:14.673229    6776 cache.go:120] Beginning downloading kic base image for docker with docker
	I0516 23:03:14.675221    6776 out.go:177] * Pulling base image ...
	I0516 23:03:14.677221    6776 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0516 23:03:14.677221    6776 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0516 23:03:14.678219    6776 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0516 23:03:14.678219    6776 cache.go:57] Caching tarball of preloaded images
	I0516 23:03:14.678219    6776 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0516 23:03:14.678219    6776 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0516 23:03:14.678219    6776 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\newest-cni-20220516230100-2444\config.json ...
	I0516 23:03:15.775034    6776 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0516 23:03:15.775086    6776 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:03:15.775086    6776 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:03:15.775086    6776 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
	I0516 23:03:15.775086    6776 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory, skipping pull
	I0516 23:03:15.775086    6776 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in cache, skipping pull
	I0516 23:03:15.775652    6776 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
	I0516 23:03:15.775652    6776 cache.go:160] Loading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from local cache
	I0516 23:03:15.775652    6776 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:03:18.124422    6776 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from cached tarball
	I0516 23:03:18.124467    6776 cache.go:206] Successfully downloaded all kic artifacts
	I0516 23:03:18.124467    6776 start.go:352] acquiring machines lock for newest-cni-20220516230100-2444: {Name:mk1391c96b8bd2d1f34dcc3d7a2394a9d5104457 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 23:03:18.124467    6776 start.go:356] acquired machines lock for "newest-cni-20220516230100-2444" in 0s
	I0516 23:03:18.124467    6776 start.go:94] Skipping create...Using existing machine configuration
	I0516 23:03:18.124467    6776 fix.go:55] fixHost starting: 
	I0516 23:03:18.143722    6776 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:03:19.210299    6776 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:19.210299    6776 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.0664952s)
	I0516 23:03:19.210299    6776 fix.go:103] recreateIfNeeded on newest-cni-20220516230100-2444: state= err=unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:03:19.210299    6776 fix.go:108] machineExists: false. err=machine does not exist
	I0516 23:03:19.213819    6776 out.go:177] * docker "newest-cni-20220516230100-2444" container is missing, will recreate.
	I0516 23:03:19.217488    6776 delete.go:124] DEMOLISHING newest-cni-20220516230100-2444 ...
	I0516 23:03:19.237590    6776 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:03:20.301785    6776 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:20.301785    6776 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.0641857s)
	W0516 23:03:20.301785    6776 stop.go:75] unable to get state: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:03:20.301785    6776 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:03:20.317743    6776 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:03:21.375213    6776 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:21.375213    6776 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.0573508s)
	I0516 23:03:21.375213    6776 delete.go:82] Unable to get host status for newest-cni-20220516230100-2444, assuming it has already been deleted: state: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:03:21.388292    6776 cli_runner.go:164] Run: docker container inspect -f {{.Id}} newest-cni-20220516230100-2444
	W0516 23:03:22.543826    6776 cli_runner.go:211] docker container inspect -f {{.Id}} newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:03:22.543826    6776 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} newest-cni-20220516230100-2444: (1.1553802s)
	I0516 23:03:22.549225    6776 kic.go:356] could not find the container newest-cni-20220516230100-2444 to remove it. will try anyways
	I0516 23:03:22.558540    6776 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:03:23.654829    6776 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:23.654829    6776 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.09628s)
	W0516 23:03:23.654829    6776 oci.go:84] error getting container status, will try to delete anyways: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:03:23.662826    6776 cli_runner.go:164] Run: docker exec --privileged -t newest-cni-20220516230100-2444 /bin/bash -c "sudo init 0"
	W0516 23:03:24.794033    6776 cli_runner.go:211] docker exec --privileged -t newest-cni-20220516230100-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 23:03:24.794033    6776 cli_runner.go:217] Completed: docker exec --privileged -t newest-cni-20220516230100-2444 /bin/bash -c "sudo init 0": (1.131197s)
	I0516 23:03:24.794033    6776 oci.go:641] error shutdown newest-cni-20220516230100-2444: docker exec --privileged -t newest-cni-20220516230100-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:03:25.812388    6776 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:03:26.932699    6776 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:26.932699    6776 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.1201548s)
	I0516 23:03:26.932699    6776 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:03:26.932699    6776 oci.go:655] temporary error: container newest-cni-20220516230100-2444 status is  but expect it to be exited
	I0516 23:03:26.932699    6776 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:03:27.495232    6776 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:03:28.578320    6776 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:28.578594    6776 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.0830365s)
	I0516 23:03:28.578698    6776 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:03:28.578740    6776 oci.go:655] temporary error: container newest-cni-20220516230100-2444 status is  but expect it to be exited
	I0516 23:03:28.578740    6776 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:03:29.679148    6776 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:03:30.815586    6776 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:30.815660    6776 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.1362927s)
	I0516 23:03:30.815660    6776 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:03:30.815660    6776 oci.go:655] temporary error: container newest-cni-20220516230100-2444 status is  but expect it to be exited
	I0516 23:03:30.815660    6776 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:03:32.136055    6776 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:03:33.238707    6776 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:33.238772    6776 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.1025684s)
	I0516 23:03:33.238890    6776 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:03:33.238950    6776 oci.go:655] temporary error: container newest-cni-20220516230100-2444 status is  but expect it to be exited
	I0516 23:03:33.239012    6776 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:03:34.831308    6776 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:03:35.862799    6776 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:35.862929    6776 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.0307923s)
	I0516 23:03:35.863010    6776 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:03:35.863050    6776 oci.go:655] temporary error: container newest-cni-20220516230100-2444 status is  but expect it to be exited
	I0516 23:03:35.863065    6776 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:03:38.228461    6776 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:03:39.284053    6776 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:39.284053    6776 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.0555825s)
	I0516 23:03:39.284053    6776 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:03:39.284053    6776 oci.go:655] temporary error: container newest-cni-20220516230100-2444 status is  but expect it to be exited
	I0516 23:03:39.284053    6776 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:03:43.809231    6776 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:03:44.921725    6776 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:03:44.921762    6776 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.1124318s)
	I0516 23:03:44.921762    6776 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:03:44.921762    6776 oci.go:655] temporary error: container newest-cni-20220516230100-2444 status is  but expect it to be exited
	I0516 23:03:44.921762    6776 oci.go:88] couldn't shut down newest-cni-20220516230100-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	 
	I0516 23:03:44.931476    6776 cli_runner.go:164] Run: docker rm -f -v newest-cni-20220516230100-2444
	I0516 23:03:46.007535    6776 cli_runner.go:217] Completed: docker rm -f -v newest-cni-20220516230100-2444: (1.0760495s)
	I0516 23:03:46.014334    6776 cli_runner.go:164] Run: docker container inspect -f {{.Id}} newest-cni-20220516230100-2444
	W0516 23:03:47.096887    6776 cli_runner.go:211] docker container inspect -f {{.Id}} newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:03:47.096887    6776 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} newest-cni-20220516230100-2444: (1.0825435s)
	I0516 23:03:47.104877    6776 cli_runner.go:164] Run: docker network inspect newest-cni-20220516230100-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:03:48.140505    6776 cli_runner.go:211] docker network inspect newest-cni-20220516230100-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:03:48.140603    6776 cli_runner.go:217] Completed: docker network inspect newest-cni-20220516230100-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0355713s)
	I0516 23:03:48.154301    6776 network_create.go:272] running [docker network inspect newest-cni-20220516230100-2444] to gather additional debugging logs...
	I0516 23:03:48.154301    6776 cli_runner.go:164] Run: docker network inspect newest-cni-20220516230100-2444
	W0516 23:03:49.226262    6776 cli_runner.go:211] docker network inspect newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:03:49.226311    6776 cli_runner.go:217] Completed: docker network inspect newest-cni-20220516230100-2444: (1.071794s)
	I0516 23:03:49.226373    6776 network_create.go:275] error running [docker network inspect newest-cni-20220516230100-2444]: docker network inspect newest-cni-20220516230100-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20220516230100-2444
	I0516 23:03:49.226482    6776 network_create.go:277] output of [docker network inspect newest-cni-20220516230100-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20220516230100-2444
	
	** /stderr **
	W0516 23:03:49.228107    6776 delete.go:139] delete failed (probably ok) <nil>
	I0516 23:03:49.228207    6776 fix.go:115] Sleeping 1 second for extra luck!
	I0516 23:03:50.237018    6776 start.go:131] createHost starting for "" (driver="docker")
	I0516 23:03:50.241057    6776 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0516 23:03:50.242034    6776 start.go:165] libmachine.API.Create for "newest-cni-20220516230100-2444" (driver="docker")
	I0516 23:03:50.242034    6776 client.go:168] LocalClient.Create starting
	I0516 23:03:50.242034    6776 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 23:03:50.242034    6776 main.go:134] libmachine: Decoding PEM data...
	I0516 23:03:50.243025    6776 main.go:134] libmachine: Parsing certificate...
	I0516 23:03:50.243025    6776 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 23:03:50.243025    6776 main.go:134] libmachine: Decoding PEM data...
	I0516 23:03:50.243025    6776 main.go:134] libmachine: Parsing certificate...
	I0516 23:03:50.251989    6776 cli_runner.go:164] Run: docker network inspect newest-cni-20220516230100-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:03:51.380679    6776 cli_runner.go:211] docker network inspect newest-cni-20220516230100-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:03:51.380756    6776 cli_runner.go:217] Completed: docker network inspect newest-cni-20220516230100-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1286256s)
	I0516 23:03:51.392149    6776 network_create.go:272] running [docker network inspect newest-cni-20220516230100-2444] to gather additional debugging logs...
	I0516 23:03:51.392149    6776 cli_runner.go:164] Run: docker network inspect newest-cni-20220516230100-2444
	W0516 23:03:52.531932    6776 cli_runner.go:211] docker network inspect newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:03:52.531987    6776 cli_runner.go:217] Completed: docker network inspect newest-cni-20220516230100-2444: (1.1397727s)
	I0516 23:03:52.532028    6776 network_create.go:275] error running [docker network inspect newest-cni-20220516230100-2444]: docker network inspect newest-cni-20220516230100-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20220516230100-2444
	I0516 23:03:52.532177    6776 network_create.go:277] output of [docker network inspect newest-cni-20220516230100-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20220516230100-2444
	
	** /stderr **
	I0516 23:03:52.542706    6776 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 23:03:53.673231    6776 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1305155s)
	I0516 23:03:53.705263    6776 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00011a2b0] misses:0}
	I0516 23:03:53.705263    6776 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:03:53.705263    6776 network_create.go:115] attempt to create docker network newest-cni-20220516230100-2444 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0516 23:03:53.712516    6776 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444
	W0516 23:03:54.835510    6776 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:03:54.835510    6776 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444: (1.1229846s)
	W0516 23:03:54.835510    6776 network_create.go:107] failed to create docker network newest-cni-20220516230100-2444 192.168.49.0/24, will retry: subnet is taken
	I0516 23:03:54.851816    6776 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00011a2b0] amended:false}} dirty:map[] misses:0}
	I0516 23:03:54.851816    6776 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:03:54.868831    6776 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00011a2b0] amended:true}} dirty:map[192.168.49.0:0xc00011a2b0 192.168.58.0:0xc0001762a8] misses:0}
	I0516 23:03:54.868831    6776 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:03:54.868831    6776 network_create.go:115] attempt to create docker network newest-cni-20220516230100-2444 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0516 23:03:54.875845    6776 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444
	W0516 23:03:56.017516    6776 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:03:56.017516    6776 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444: (1.1416607s)
	W0516 23:03:56.017516    6776 network_create.go:107] failed to create docker network newest-cni-20220516230100-2444 192.168.58.0/24, will retry: subnet is taken
	I0516 23:03:56.034801    6776 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00011a2b0] amended:true}} dirty:map[192.168.49.0:0xc00011a2b0 192.168.58.0:0xc0001762a8] misses:1}
	I0516 23:03:56.034801    6776 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:03:56.054947    6776 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00011a2b0] amended:true}} dirty:map[192.168.49.0:0xc00011a2b0 192.168.58.0:0xc0001762a8 192.168.67.0:0xc00076a488] misses:1}
	I0516 23:03:56.054947    6776 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:03:56.054947    6776 network_create.go:115] attempt to create docker network newest-cni-20220516230100-2444 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0516 23:03:56.065794    6776 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444
	W0516 23:03:57.152110    6776 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:03:57.152248    6776 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444: (1.0863072s)
	W0516 23:03:57.152286    6776 network_create.go:107] failed to create docker network newest-cni-20220516230100-2444 192.168.67.0/24, will retry: subnet is taken
	I0516 23:03:57.170949    6776 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00011a2b0] amended:true}} dirty:map[192.168.49.0:0xc00011a2b0 192.168.58.0:0xc0001762a8 192.168.67.0:0xc00076a488] misses:2}
	I0516 23:03:57.170949    6776 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:03:57.186427    6776 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00011a2b0] amended:true}} dirty:map[192.168.49.0:0xc00011a2b0 192.168.58.0:0xc0001762a8 192.168.67.0:0xc00076a488 192.168.76.0:0xc0001763b0] misses:2}
	I0516 23:03:57.186427    6776 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:03:57.186427    6776 network_create.go:115] attempt to create docker network newest-cni-20220516230100-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0516 23:03:57.194426    6776 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444
	W0516 23:03:58.270874    6776 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:03:58.270874    6776 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444: (1.0762425s)
	E0516 23:03:58.270874    6776 network_create.go:104] error while trying to create docker network newest-cni-20220516230100-2444 192.168.76.0/24: create docker network newest-cni-20220516230100-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 2a09f97593341961abc989a39e956490cf6185ba5318546ce74e4693c0811538 (br-2a09f9759334): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	W0516 23:03:58.270874    6776 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network newest-cni-20220516230100-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 2a09f97593341961abc989a39e956490cf6185ba5318546ce74e4693c0811538 (br-2a09f9759334): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network newest-cni-20220516230100-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 2a09f97593341961abc989a39e956490cf6185ba5318546ce74e4693c0811538 (br-2a09f9759334): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	I0516 23:03:58.289251    6776 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 23:03:59.387147    6776 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0978859s)
	I0516 23:03:59.397529    6776 cli_runner.go:164] Run: docker volume create newest-cni-20220516230100-2444 --label name.minikube.sigs.k8s.io=newest-cni-20220516230100-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 23:04:00.446688    6776 cli_runner.go:211] docker volume create newest-cni-20220516230100-2444 --label name.minikube.sigs.k8s.io=newest-cni-20220516230100-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 23:04:00.446862    6776 cli_runner.go:217] Completed: docker volume create newest-cni-20220516230100-2444 --label name.minikube.sigs.k8s.io=newest-cni-20220516230100-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0491495s)
	I0516 23:04:00.446946    6776 client.go:171] LocalClient.Create took 10.204824s
	I0516 23:04:02.474759    6776 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:04:02.487998    6776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444
	W0516 23:04:03.590520    6776 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:04:03.590520    6776 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: (1.1025133s)
	I0516 23:04:03.590520    6776 retry.go:31] will retry after 164.129813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:04:03.774100    6776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444
	W0516 23:04:04.912629    6776 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:04:04.912790    6776 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: (1.1384544s)
	W0516 23:04:04.913011    6776 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	
	W0516 23:04:04.913044    6776 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:04:04.928004    6776 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:04:04.936595    6776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444
	W0516 23:04:06.061292    6776 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:04:06.061342    6776 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: (1.1240112s)
	I0516 23:04:06.061520    6776 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:04:06.274872    6776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444
	W0516 23:04:07.364081    6776 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:04:07.364081    6776 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: (1.0891992s)
	W0516 23:04:07.364081    6776 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	
	W0516 23:04:07.364081    6776 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:04:07.364081    6776 start.go:134] duration metric: createHost completed in 17.1269146s
	I0516 23:04:07.378772    6776 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:04:07.386344    6776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444
	W0516 23:04:08.562082    6776 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:04:08.562082    6776 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: (1.1757282s)
	I0516 23:04:08.562082    6776 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:04:08.907025    6776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444
	W0516 23:04:10.025897    6776 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:04:10.025944    6776 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: (1.1186949s)
	W0516 23:04:10.025944    6776 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	
	W0516 23:04:10.025944    6776 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:04:10.036370    6776 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:04:10.043374    6776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444
	W0516 23:04:11.164993    6776 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:04:11.164993    6776 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: (1.1216094s)
	I0516 23:04:11.164993    6776 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:04:11.396750    6776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444
	W0516 23:04:12.554582    6776 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:04:12.554582    6776 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: (1.1578224s)
	W0516 23:04:12.554582    6776 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	
	W0516 23:04:12.554582    6776 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:04:12.554582    6776 fix.go:57] fixHost completed within 54.4296457s
	I0516 23:04:12.554582    6776 start.go:81] releasing machines lock for "newest-cni-20220516230100-2444", held for 54.4296457s
	W0516 23:04:12.554582    6776 start.go:608] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220516230100-2444 container: docker volume create newest-cni-20220516230100-2444 --label name.minikube.sigs.k8s.io=newest-cni-20220516230100-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220516230100-2444: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220516230100-2444': mkdir /var/lib/docker/volumes/newest-cni-20220516230100-2444: read-only file system
	W0516 23:04:12.554582    6776 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220516230100-2444 container: docker volume create newest-cni-20220516230100-2444 --label name.minikube.sigs.k8s.io=newest-cni-20220516230100-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220516230100-2444: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220516230100-2444': mkdir /var/lib/docker/volumes/newest-cni-20220516230100-2444: read-only file system
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220516230100-2444 container: docker volume create newest-cni-20220516230100-2444 --label name.minikube.sigs.k8s.io=newest-cni-20220516230100-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220516230100-2444: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220516230100-2444': mkdir /var/lib/docker/volumes/newest-cni-20220516230100-2444: read-only file system
	
	I0516 23:04:12.554582    6776 start.go:623] Will try again in 5 seconds ...
	I0516 23:04:17.566325    6776 start.go:352] acquiring machines lock for newest-cni-20220516230100-2444: {Name:mk1391c96b8bd2d1f34dcc3d7a2394a9d5104457 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 23:04:17.566325    6776 start.go:356] acquired machines lock for "newest-cni-20220516230100-2444" in 0s
	I0516 23:04:17.566325    6776 start.go:94] Skipping create...Using existing machine configuration
	I0516 23:04:17.566325    6776 fix.go:55] fixHost starting: 
	I0516 23:04:17.581734    6776 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:04:18.673306    6776 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:04:18.673306    6776 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.091271s)
	I0516 23:04:18.673306    6776 fix.go:103] recreateIfNeeded on newest-cni-20220516230100-2444: state= err=unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:04:18.673306    6776 fix.go:108] machineExists: false. err=machine does not exist
	I0516 23:04:18.676066    6776 out.go:177] * docker "newest-cni-20220516230100-2444" container is missing, will recreate.
	I0516 23:04:18.679198    6776 delete.go:124] DEMOLISHING newest-cni-20220516230100-2444 ...
	I0516 23:04:18.694092    6776 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:04:19.815966    6776 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:04:19.815966    6776 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.1217331s)
	W0516 23:04:19.815966    6776 stop.go:75] unable to get state: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:04:19.815966    6776 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:04:19.835306    6776 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:04:20.939675    6776 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:04:20.939675    6776 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.1043592s)
	I0516 23:04:20.939675    6776 delete.go:82] Unable to get host status for newest-cni-20220516230100-2444, assuming it has already been deleted: state: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:04:20.958662    6776 cli_runner.go:164] Run: docker container inspect -f {{.Id}} newest-cni-20220516230100-2444
	W0516 23:04:22.085347    6776 cli_runner.go:211] docker container inspect -f {{.Id}} newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:04:22.085415    6776 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} newest-cni-20220516230100-2444: (1.1265962s)
	I0516 23:04:22.085415    6776 kic.go:356] could not find the container newest-cni-20220516230100-2444 to remove it. will try anyways
	I0516 23:04:22.094076    6776 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:04:23.195599    6776 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:04:23.195599    6776 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.1015134s)
	W0516 23:04:23.195599    6776 oci.go:84] error getting container status, will try to delete anyways: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:04:23.205762    6776 cli_runner.go:164] Run: docker exec --privileged -t newest-cni-20220516230100-2444 /bin/bash -c "sudo init 0"
	W0516 23:04:24.285250    6776 cli_runner.go:211] docker exec --privileged -t newest-cni-20220516230100-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 23:04:24.285250    6776 cli_runner.go:217] Completed: docker exec --privileged -t newest-cni-20220516230100-2444 /bin/bash -c "sudo init 0": (1.079479s)
	I0516 23:04:24.285250    6776 oci.go:641] error shutdown newest-cni-20220516230100-2444: docker exec --privileged -t newest-cni-20220516230100-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:04:25.303910    6776 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:04:26.466669    6776 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:04:26.466669    6776 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.1627491s)
	I0516 23:04:26.466669    6776 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:04:26.466669    6776 oci.go:655] temporary error: container newest-cni-20220516230100-2444 status is  but expect it to be exited
	I0516 23:04:26.466669    6776 retry.go:31] will retry after 484.444922ms: couldn't verify container is exited. %v: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:04:26.975479    6776 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:04:28.066785    6776 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:04:28.066785    6776 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.0912962s)
	I0516 23:04:28.066785    6776 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:04:28.066785    6776 oci.go:655] temporary error: container newest-cni-20220516230100-2444 status is  but expect it to be exited
	I0516 23:04:28.066785    6776 retry.go:31] will retry after 587.275613ms: couldn't verify container is exited. %v: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:04:28.662122    6776 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:04:29.730471    6776 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:04:29.730471    6776 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.0681818s)
	I0516 23:04:29.730534    6776 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:04:29.730534    6776 oci.go:655] temporary error: container newest-cni-20220516230100-2444 status is  but expect it to be exited
	I0516 23:04:29.730534    6776 retry.go:31] will retry after 892.239589ms: couldn't verify container is exited. %v: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:04:30.638666    6776 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:04:31.696682    6776 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:04:31.696682    6776 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.0580065s)
	I0516 23:04:31.696682    6776 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:04:31.696682    6776 oci.go:655] temporary error: container newest-cni-20220516230100-2444 status is  but expect it to be exited
	I0516 23:04:31.696682    6776 retry.go:31] will retry after 1.989705391s: couldn't verify container is exited. %v: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:04:33.704182    6776 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:04:34.774428    6776 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:04:34.774428    6776 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.0702361s)
	I0516 23:04:34.774428    6776 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:04:34.774428    6776 oci.go:655] temporary error: container newest-cni-20220516230100-2444 status is  but expect it to be exited
	I0516 23:04:34.774428    6776 retry.go:31] will retry after 1.818837414s: couldn't verify container is exited. %v: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:04:36.617901    6776 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:04:37.699190    6776 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:04:37.699190    6776 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.081279s)
	I0516 23:04:37.699190    6776 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:04:37.699190    6776 oci.go:655] temporary error: container newest-cni-20220516230100-2444 status is  but expect it to be exited
	I0516 23:04:37.699190    6776 retry.go:31] will retry after 2.669912672s: couldn't verify container is exited. %v: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:04:40.386790    6776 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:04:41.465055    6776 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:04:41.465118    6776 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (1.0775012s)
	I0516 23:04:41.465118    6776 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:04:41.465118    6776 oci.go:655] temporary error: container newest-cni-20220516230100-2444 status is  but expect it to be exited
	I0516 23:04:41.465118    6776 oci.go:88] couldn't shut down newest-cni-20220516230100-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	 
	I0516 23:04:41.474889    6776 cli_runner.go:164] Run: docker rm -f -v newest-cni-20220516230100-2444
	I0516 23:04:42.581137    6776 cli_runner.go:217] Completed: docker rm -f -v newest-cni-20220516230100-2444: (1.1062378s)
	I0516 23:04:42.591063    6776 cli_runner.go:164] Run: docker container inspect -f {{.Id}} newest-cni-20220516230100-2444
	W0516 23:04:43.661831    6776 cli_runner.go:211] docker container inspect -f {{.Id}} newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:04:43.661831    6776 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} newest-cni-20220516230100-2444: (1.0707226s)
	I0516 23:04:43.671893    6776 cli_runner.go:164] Run: docker network inspect newest-cni-20220516230100-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:04:44.769778    6776 cli_runner.go:211] docker network inspect newest-cni-20220516230100-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:04:44.769778    6776 cli_runner.go:217] Completed: docker network inspect newest-cni-20220516230100-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0978759s)
	I0516 23:04:44.777782    6776 network_create.go:272] running [docker network inspect newest-cni-20220516230100-2444] to gather additional debugging logs...
	I0516 23:04:44.777782    6776 cli_runner.go:164] Run: docker network inspect newest-cni-20220516230100-2444
	W0516 23:04:45.884265    6776 cli_runner.go:211] docker network inspect newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:04:45.884265    6776 cli_runner.go:217] Completed: docker network inspect newest-cni-20220516230100-2444: (1.1064738s)
	I0516 23:04:45.884265    6776 network_create.go:275] error running [docker network inspect newest-cni-20220516230100-2444]: docker network inspect newest-cni-20220516230100-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20220516230100-2444
	I0516 23:04:45.884265    6776 network_create.go:277] output of [docker network inspect newest-cni-20220516230100-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20220516230100-2444
	
	** /stderr **
	W0516 23:04:45.884936    6776 delete.go:139] delete failed (probably ok) <nil>
	I0516 23:04:45.885480    6776 fix.go:115] Sleeping 1 second for extra luck!
	I0516 23:04:46.891831    6776 start.go:131] createHost starting for "" (driver="docker")
	I0516 23:04:46.899235    6776 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0516 23:04:46.899787    6776 start.go:165] libmachine.API.Create for "newest-cni-20220516230100-2444" (driver="docker")
	I0516 23:04:46.899851    6776 client.go:168] LocalClient.Create starting
	I0516 23:04:46.900121    6776 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 23:04:46.900674    6776 main.go:134] libmachine: Decoding PEM data...
	I0516 23:04:46.900736    6776 main.go:134] libmachine: Parsing certificate...
	I0516 23:04:46.901006    6776 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 23:04:46.901211    6776 main.go:134] libmachine: Decoding PEM data...
	I0516 23:04:46.901211    6776 main.go:134] libmachine: Parsing certificate...
	I0516 23:04:46.910130    6776 cli_runner.go:164] Run: docker network inspect newest-cni-20220516230100-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:04:48.010090    6776 cli_runner.go:211] docker network inspect newest-cni-20220516230100-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:04:48.010090    6776 cli_runner.go:217] Completed: docker network inspect newest-cni-20220516230100-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0999508s)
	I0516 23:04:48.017126    6776 network_create.go:272] running [docker network inspect newest-cni-20220516230100-2444] to gather additional debugging logs...
	I0516 23:04:48.017126    6776 cli_runner.go:164] Run: docker network inspect newest-cni-20220516230100-2444
	W0516 23:04:49.102848    6776 cli_runner.go:211] docker network inspect newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:04:49.102978    6776 cli_runner.go:217] Completed: docker network inspect newest-cni-20220516230100-2444: (1.0856245s)
	I0516 23:04:49.102978    6776 network_create.go:275] error running [docker network inspect newest-cni-20220516230100-2444]: docker network inspect newest-cni-20220516230100-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20220516230100-2444
	I0516 23:04:49.102978    6776 network_create.go:277] output of [docker network inspect newest-cni-20220516230100-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20220516230100-2444
	
	** /stderr **
	I0516 23:04:49.112137    6776 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 23:04:50.203471    6776 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0912308s)
	I0516 23:04:50.220293    6776 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00011a2b0] amended:true}} dirty:map[192.168.49.0:0xc00011a2b0 192.168.58.0:0xc0001762a8 192.168.67.0:0xc00076a488 192.168.76.0:0xc0001763b0] misses:2}
	I0516 23:04:50.220293    6776 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:04:50.235683    6776 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00011a2b0] amended:true}} dirty:map[192.168.49.0:0xc00011a2b0 192.168.58.0:0xc0001762a8 192.168.67.0:0xc00076a488 192.168.76.0:0xc0001763b0] misses:3}
	I0516 23:04:50.235683    6776 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:04:50.249777    6776 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00011a2b0 192.168.58.0:0xc0001762a8 192.168.67.0:0xc00076a488 192.168.76.0:0xc0001763b0] amended:false}} dirty:map[] misses:0}
	I0516 23:04:50.249777    6776 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:04:50.265707    6776 network.go:279] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00011a2b0 192.168.58.0:0xc0001762a8 192.168.67.0:0xc00076a488 192.168.76.0:0xc0001763b0] amended:false}} dirty:map[] misses:0}
	I0516 23:04:50.265707    6776 network.go:238] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:04:50.280273    6776 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00011a2b0 192.168.58.0:0xc0001762a8 192.168.67.0:0xc00076a488 192.168.76.0:0xc0001763b0] amended:true}} dirty:map[192.168.49.0:0xc00011a2b0 192.168.58.0:0xc0001762a8 192.168.67.0:0xc00076a488 192.168.76.0:0xc0001763b0 192.168.85.0:0xc00060c3f0] misses:0}
	I0516 23:04:50.280386    6776 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:04:50.280435    6776 network_create.go:115] attempt to create docker network newest-cni-20220516230100-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0516 23:04:50.289213    6776 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444
	W0516 23:04:51.386141    6776 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:04:51.386222    6776 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444: (1.096827s)
	E0516 23:04:51.386289    6776 network_create.go:104] error while trying to create docker network newest-cni-20220516230100-2444 192.168.85.0/24: create docker network newest-cni-20220516230100-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 7078d0f7c002b9f1ae4872bbd4bb61c365dad64d812459768a4be37fae6f0c0d (br-7078d0f7c002): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	W0516 23:04:51.386289    6776 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network newest-cni-20220516230100-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 7078d0f7c002b9f1ae4872bbd4bb61c365dad64d812459768a4be37fae6f0c0d (br-7078d0f7c002): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network newest-cni-20220516230100-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 7078d0f7c002b9f1ae4872bbd4bb61c365dad64d812459768a4be37fae6f0c0d (br-7078d0f7c002): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	I0516 23:04:51.400896    6776 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 23:04:52.450796    6776 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0498906s)
	I0516 23:04:52.460234    6776 cli_runner.go:164] Run: docker volume create newest-cni-20220516230100-2444 --label name.minikube.sigs.k8s.io=newest-cni-20220516230100-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 23:04:53.480729    6776 cli_runner.go:211] docker volume create newest-cni-20220516230100-2444 --label name.minikube.sigs.k8s.io=newest-cni-20220516230100-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 23:04:53.480729    6776 cli_runner.go:217] Completed: docker volume create newest-cni-20220516230100-2444 --label name.minikube.sigs.k8s.io=newest-cni-20220516230100-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0204854s)
	I0516 23:04:53.480729    6776 client.go:171] LocalClient.Create took 6.5808207s
	I0516 23:04:55.497334    6776 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:04:55.504198    6776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444
	W0516 23:04:56.588818    6776 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:04:56.588818    6776 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: (1.0846104s)
	I0516 23:04:56.588818    6776 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:04:56.867207    6776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444
	W0516 23:04:57.998278    6776 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:04:57.998346    6776 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: (1.131038s)
	W0516 23:04:57.998792    6776 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	
	W0516 23:04:57.998829    6776 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:04:58.016881    6776 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:04:58.026136    6776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444
	W0516 23:04:59.162439    6776 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:04:59.162439    6776 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: (1.1362422s)
	I0516 23:04:59.162439    6776 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:04:59.381452    6776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444
	W0516 23:05:00.482238    6776 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:05:00.482238    6776 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: (1.1007765s)
	W0516 23:05:00.482238    6776 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	
	W0516 23:05:00.482238    6776 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:05:00.482238    6776 start.go:134] duration metric: createHost completed in 13.5900732s
	I0516 23:05:00.496233    6776 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:05:00.504236    6776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444
	W0516 23:05:01.616596    6776 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:05:01.616596    6776 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: (1.1123497s)
	I0516 23:05:01.616596    6776 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:05:01.949419    6776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444
	W0516 23:05:03.016630    6776 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:05:03.016630    6776 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: (1.0672014s)
	W0516 23:05:03.016630    6776 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	
	W0516 23:05:03.016630    6776 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:05:03.026698    6776 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:05:03.034619    6776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444
	W0516 23:05:04.173497    6776 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:05:04.173592    6776 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: (1.1386862s)
	I0516 23:05:04.173750    6776 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:05:04.539447    6776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444
	W0516 23:05:05.630552    6776 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444 returned with exit code 1
	I0516 23:05:05.630552    6776 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: (1.0910315s)
	W0516 23:05:05.630552    6776 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	
	W0516 23:05:05.630552    6776 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220516230100-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220516230100-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	I0516 23:05:05.630552    6776 fix.go:57] fixHost completed within 48.0638084s
	I0516 23:05:05.630552    6776 start.go:81] releasing machines lock for "newest-cni-20220516230100-2444", held for 48.0638084s
	W0516 23:05:05.630552    6776 out.go:239] * Failed to start docker container. Running "minikube delete -p newest-cni-20220516230100-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220516230100-2444 container: docker volume create newest-cni-20220516230100-2444 --label name.minikube.sigs.k8s.io=newest-cni-20220516230100-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220516230100-2444: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220516230100-2444': mkdir /var/lib/docker/volumes/newest-cni-20220516230100-2444: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p newest-cni-20220516230100-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220516230100-2444 container: docker volume create newest-cni-20220516230100-2444 --label name.minikube.sigs.k8s.io=newest-cni-20220516230100-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220516230100-2444: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220516230100-2444': mkdir /var/lib/docker/volumes/newest-cni-20220516230100-2444: read-only file system
	
	I0516 23:05:05.638075    6776 out.go:177] 
	W0516 23:05:05.640368    6776 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220516230100-2444 container: docker volume create newest-cni-20220516230100-2444 --label name.minikube.sigs.k8s.io=newest-cni-20220516230100-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220516230100-2444: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220516230100-2444': mkdir /var/lib/docker/volumes/newest-cni-20220516230100-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220516230100-2444 container: docker volume create newest-cni-20220516230100-2444 --label name.minikube.sigs.k8s.io=newest-cni-20220516230100-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220516230100-2444: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220516230100-2444': mkdir /var/lib/docker/volumes/newest-cni-20220516230100-2444: read-only file system
	
	W0516 23:05:05.640368    6776 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0516 23:05:05.640368    6776 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0516 23:05:05.646037    6776 out.go:177] 

** /stderr **
start_stop_delete_test.go:261: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p newest-cni-20220516230100-2444 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.6": exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220516230100-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220516230100-2444: exit status 1 (1.2017935s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: newest-cni-20220516230100-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220516230100-2444 -n newest-cni-20220516230100-2444

=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220516230100-2444 -n newest-cni-20220516230100-2444: exit status 7 (2.9932896s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 23:05:10.031233    5980 status.go:247] status error: host: state: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20220516230100-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (122.84s)
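The post-mortem above leans on minikube's status exit codes: `docker inspect` returns exit status 1 because the container was never created, and `minikube status` then returns exit status 7 with state `Nonexistent`, which the harness treats as "may be ok" and uses to skip log retrieval. A minimal sketch of that handling, where `check_host` is a hypothetical stand-in for the real `minikube status --format={{.Host}} -p <profile>` call (the exit-code-7 convention is an assumption drawn from the log above):

```shell
# check_host simulates:
#   out/minikube-windows-amd64.exe status --format={{.Host}} -p <profile>
# Assumption: exit status 7 means the profile's host does not exist.
check_host() {
  echo "Nonexistent"   # simulated stdout of the status command
  return 7
}

state=$(check_host)
rc=$?                  # exit status of the command substitution = check_host's
if [ "$rc" -eq 7 ]; then
  # Mirrors helpers_test.go: a status error of 7 is non-fatal; skip logs.
  echo "host is not running, skipping log retrieval (state=$state)"
fi
```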

TestNetworkPlugins/group/custom-weave/Start (81.97s)

=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-weave-20220516225309-2444 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata\weavenet.yaml --driver=docker

=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p custom-weave-20220516225309-2444 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata\weavenet.yaml --driver=docker: exit status 60 (1m21.8740866s)

-- stdout --
	* [custom-weave-20220516225309-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node custom-weave-20220516225309-2444 in cluster custom-weave-20220516225309-2444
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "custom-weave-20220516225309-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0516 23:04:20.958662    7516 out.go:296] Setting OutFile to fd 1652 ...
	I0516 23:04:21.019133    7516 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:04:21.019133    7516 out.go:309] Setting ErrFile to fd 2012...
	I0516 23:04:21.019133    7516 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:04:21.031026    7516 out.go:303] Setting JSON to false
	I0516 23:04:21.035322    7516 start.go:115] hostinfo: {"hostname":"minikube2","uptime":5373,"bootTime":1652736888,"procs":156,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0516 23:04:21.035322    7516 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0516 23:04:21.040215    7516 out.go:177] * [custom-weave-20220516225309-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0516 23:04:21.044659    7516 notify.go:193] Checking for updates...
	I0516 23:04:21.047181    7516 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0516 23:04:21.049113    7516 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0516 23:04:21.052135    7516 out.go:177]   - MINIKUBE_LOCATION=12739
	I0516 23:04:21.054139    7516 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0516 23:04:21.058123    7516 config.go:178] Loaded profile config "calico-20220516225309-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:04:21.058123    7516 config.go:178] Loaded profile config "default-k8s-different-port-20220516230045-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:04:21.059138    7516 config.go:178] Loaded profile config "multinode-20220516223121-2444-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:04:21.059138    7516 config.go:178] Loaded profile config "newest-cni-20220516230100-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:04:21.059138    7516 driver.go:358] Setting default libvirt URI to qemu:///system
	I0516 23:04:23.736163    7516 docker.go:137] docker version: linux-20.10.14
	I0516 23:04:23.744825    7516 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 23:04:25.863329    7516 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1184857s)
	I0516 23:04:25.863971    7516 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 23:04:24.7852963 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0516 23:04:25.867770    7516 out.go:177] * Using the docker driver based on user configuration
	I0516 23:04:25.870760    7516 start.go:284] selected driver: docker
	I0516 23:04:25.870760    7516 start.go:806] validating driver "docker" against <nil>
	I0516 23:04:25.870760    7516 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0516 23:04:25.992079    7516 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 23:04:28.146346    7516 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1540357s)
	I0516 23:04:28.146346    7516 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 23:04:27.0624564 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0516 23:04:28.146897    7516 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0516 23:04:28.147064    7516 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0516 23:04:28.150210    7516 out.go:177] * Using Docker Desktop driver with the root privilege
	I0516 23:04:28.152198    7516 cni.go:95] Creating CNI manager for "testdata\\weavenet.yaml"
	I0516 23:04:28.152198    7516 start_flags.go:301] Found "testdata\\weavenet.yaml" CNI - setting NetworkPlugin=cni
	I0516 23:04:28.152198    7516 start_flags.go:306] config:
	{Name:custom-weave-20220516225309-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:custom-weave-20220516225309-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 23:04:28.156409    7516 out.go:177] * Starting control plane node custom-weave-20220516225309-2444 in cluster custom-weave-20220516225309-2444
	I0516 23:04:28.158457    7516 cache.go:120] Beginning downloading kic base image for docker with docker
	I0516 23:04:28.160347    7516 out.go:177] * Pulling base image ...
	I0516 23:04:28.165530    7516 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0516 23:04:28.165530    7516 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0516 23:04:28.166258    7516 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0516 23:04:28.166258    7516 cache.go:57] Caching tarball of preloaded images
	I0516 23:04:28.166258    7516 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0516 23:04:28.166875    7516 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0516 23:04:28.166935    7516 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\custom-weave-20220516225309-2444\config.json ...
	I0516 23:04:28.166935    7516 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\custom-weave-20220516225309-2444\config.json: {Name:mk25a3277f7a7d7912d2f69cefd7d4d52b5b4338 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0516 23:04:29.303989    7516 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0516 23:04:29.303989    7516 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:04:29.303989    7516 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:04:29.303989    7516 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
	I0516 23:04:29.303989    7516 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory, skipping pull
	I0516 23:04:29.303989    7516 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in cache, skipping pull
	I0516 23:04:29.303989    7516 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
	I0516 23:04:29.303989    7516 cache.go:160] Loading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from local cache
	I0516 23:04:29.303989    7516 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:04:31.593615    7516 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from cached tarball
	I0516 23:04:31.593615    7516 cache.go:206] Successfully downloaded all kic artifacts
	I0516 23:04:31.593615    7516 start.go:352] acquiring machines lock for custom-weave-20220516225309-2444: {Name:mk686c9d2a98b2affc3ee1777cdf0baf43f1a69f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 23:04:31.593615    7516 start.go:356] acquired machines lock for "custom-weave-20220516225309-2444" in 0s
	I0516 23:04:31.594611    7516 start.go:91] Provisioning new machine with config: &{Name:custom-weave-20220516225309-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:custom-weave-20220516225309-2444 Namespace:default APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:d
ocker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0516 23:04:31.594611    7516 start.go:131] createHost starting for "" (driver="docker")
	I0516 23:04:31.602616    7516 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0516 23:04:31.602616    7516 start.go:165] libmachine.API.Create for "custom-weave-20220516225309-2444" (driver="docker")
	I0516 23:04:31.602616    7516 client.go:168] LocalClient.Create starting
	I0516 23:04:31.603607    7516 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 23:04:31.603607    7516 main.go:134] libmachine: Decoding PEM data...
	I0516 23:04:31.603607    7516 main.go:134] libmachine: Parsing certificate...
	I0516 23:04:31.603607    7516 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 23:04:31.603607    7516 main.go:134] libmachine: Decoding PEM data...
	I0516 23:04:31.603607    7516 main.go:134] libmachine: Parsing certificate...
	I0516 23:04:31.613615    7516 cli_runner.go:164] Run: docker network inspect custom-weave-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:04:32.721382    7516 cli_runner.go:211] docker network inspect custom-weave-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:04:32.721382    7516 cli_runner.go:217] Completed: docker network inspect custom-weave-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1075276s)
	I0516 23:04:32.728978    7516 network_create.go:272] running [docker network inspect custom-weave-20220516225309-2444] to gather additional debugging logs...
	I0516 23:04:32.728978    7516 cli_runner.go:164] Run: docker network inspect custom-weave-20220516225309-2444
	W0516 23:04:33.758182    7516 cli_runner.go:211] docker network inspect custom-weave-20220516225309-2444 returned with exit code 1
	I0516 23:04:33.758182    7516 cli_runner.go:217] Completed: docker network inspect custom-weave-20220516225309-2444: (1.0291942s)
	I0516 23:04:33.758182    7516 network_create.go:275] error running [docker network inspect custom-weave-20220516225309-2444]: docker network inspect custom-weave-20220516225309-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: custom-weave-20220516225309-2444
	I0516 23:04:33.758182    7516 network_create.go:277] output of [docker network inspect custom-weave-20220516225309-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: custom-weave-20220516225309-2444
	
	** /stderr **
	I0516 23:04:33.765165    7516 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 23:04:34.851554    7516 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0863263s)
	I0516 23:04:34.872748    7516 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00010c6e8] misses:0}
	I0516 23:04:34.873177    7516 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:04:34.873177    7516 network_create.go:115] attempt to create docker network custom-weave-20220516225309-2444 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0516 23:04:34.881541    7516 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220516225309-2444
	W0516 23:04:35.953742    7516 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220516225309-2444 returned with exit code 1
	I0516 23:04:35.953829    7516 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220516225309-2444: (1.0720414s)
	W0516 23:04:35.953863    7516 network_create.go:107] failed to create docker network custom-weave-20220516225309-2444 192.168.49.0/24, will retry: subnet is taken
	I0516 23:04:35.973404    7516 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00010c6e8] amended:false}} dirty:map[] misses:0}
	I0516 23:04:35.973404    7516 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:04:35.992901    7516 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00010c6e8] amended:true}} dirty:map[192.168.49.0:0xc00010c6e8 192.168.58.0:0xc00010c7b0] misses:0}
	I0516 23:04:35.992901    7516 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:04:35.992901    7516 network_create.go:115] attempt to create docker network custom-weave-20220516225309-2444 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0516 23:04:36.000059    7516 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220516225309-2444
	W0516 23:04:37.129820    7516 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220516225309-2444 returned with exit code 1
	I0516 23:04:37.129820    7516 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220516225309-2444: (1.128614s)
	W0516 23:04:37.129820    7516 network_create.go:107] failed to create docker network custom-weave-20220516225309-2444 192.168.58.0/24, will retry: subnet is taken
	I0516 23:04:37.149308    7516 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00010c6e8] amended:true}} dirty:map[192.168.49.0:0xc00010c6e8 192.168.58.0:0xc00010c7b0] misses:1}
	I0516 23:04:37.149308    7516 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:04:37.167309    7516 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00010c6e8] amended:true}} dirty:map[192.168.49.0:0xc00010c6e8 192.168.58.0:0xc00010c7b0 192.168.67.0:0xc000a8e2f0] misses:1}
	I0516 23:04:37.167841    7516 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:04:37.167841    7516 network_create.go:115] attempt to create docker network custom-weave-20220516225309-2444 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0516 23:04:37.175058    7516 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220516225309-2444
	W0516 23:04:38.249223    7516 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220516225309-2444 returned with exit code 1
	I0516 23:04:38.249223    7516 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220516225309-2444: (1.074156s)
	W0516 23:04:38.249223    7516 network_create.go:107] failed to create docker network custom-weave-20220516225309-2444 192.168.67.0/24, will retry: subnet is taken
	I0516 23:04:38.271982    7516 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00010c6e8] amended:true}} dirty:map[192.168.49.0:0xc00010c6e8 192.168.58.0:0xc00010c7b0 192.168.67.0:0xc000a8e2f0] misses:2}
	I0516 23:04:38.272525    7516 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:04:38.290623    7516 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00010c6e8] amended:true}} dirty:map[192.168.49.0:0xc00010c6e8 192.168.58.0:0xc00010c7b0 192.168.67.0:0xc000a8e2f0 192.168.76.0:0xc00010c848] misses:2}
	I0516 23:04:38.290623    7516 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:04:38.290623    7516 network_create.go:115] attempt to create docker network custom-weave-20220516225309-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0516 23:04:38.298718    7516 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220516225309-2444
	W0516 23:04:39.405747    7516 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220516225309-2444 returned with exit code 1
	I0516 23:04:39.405747    7516 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220516225309-2444: (1.1070194s)
	E0516 23:04:39.405747    7516 network_create.go:104] error while trying to create docker network custom-weave-20220516225309-2444 192.168.76.0/24: create docker network custom-weave-20220516225309-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220516225309-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 60c9a4acea4b3273f67a4867e26183ac914d0a22f84ab09c773dc343c285d93e (br-60c9a4acea4b): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	W0516 23:04:39.405747    7516 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network custom-weave-20220516225309-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220516225309-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 60c9a4acea4b3273f67a4867e26183ac914d0a22f84ab09c773dc343c285d93e (br-60c9a4acea4b): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network custom-weave-20220516225309-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220516225309-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 60c9a4acea4b3273f67a4867e26183ac914d0a22f84ab09c773dc343c285d93e (br-60c9a4acea4b): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	I0516 23:04:39.422982    7516 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 23:04:40.520992    7516 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0980001s)
	I0516 23:04:40.527978    7516 cli_runner.go:164] Run: docker volume create custom-weave-20220516225309-2444 --label name.minikube.sigs.k8s.io=custom-weave-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 23:04:41.629596    7516 cli_runner.go:211] docker volume create custom-weave-20220516225309-2444 --label name.minikube.sigs.k8s.io=custom-weave-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 23:04:41.629596    7516 cli_runner.go:217] Completed: docker volume create custom-weave-20220516225309-2444 --label name.minikube.sigs.k8s.io=custom-weave-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: (1.1016088s)
	I0516 23:04:41.632076    7516 client.go:171] LocalClient.Create took 10.0293733s
	I0516 23:04:43.662915    7516 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:04:43.673923    7516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444
	W0516 23:04:44.753791    7516 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444 returned with exit code 1
	I0516 23:04:44.753791    7516 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444: (1.0798588s)
	I0516 23:04:44.753791    7516 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	I0516 23:04:45.045482    7516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444
	W0516 23:04:46.119322    7516 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444 returned with exit code 1
	I0516 23:04:46.119322    7516 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444: (1.0736833s)
	W0516 23:04:46.119322    7516 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	
	W0516 23:04:46.119322    7516 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	I0516 23:04:46.129173    7516 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:04:46.137170    7516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444
	W0516 23:04:47.223153    7516 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444 returned with exit code 1
	I0516 23:04:47.223153    7516 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444: (1.0859742s)
	I0516 23:04:47.223153    7516 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	I0516 23:04:47.531999    7516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444
	W0516 23:04:48.649154    7516 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444 returned with exit code 1
	I0516 23:04:48.649154    7516 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444: (1.1170618s)
	W0516 23:04:48.649154    7516 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	
	W0516 23:04:48.649154    7516 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	I0516 23:04:48.649154    7516 start.go:134] duration metric: createHost completed in 17.0543941s
	I0516 23:04:48.649154    7516 start.go:81] releasing machines lock for "custom-weave-20220516225309-2444", held for 17.0553901s
	W0516 23:04:48.649154    7516 start.go:608] error starting host: creating host: create: creating: setting up container node: creating volume for custom-weave-20220516225309-2444 container: docker volume create custom-weave-20220516225309-2444 --label name.minikube.sigs.k8s.io=custom-weave-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create custom-weave-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/custom-weave-20220516225309-2444': mkdir /var/lib/docker/volumes/custom-weave-20220516225309-2444: read-only file system
	I0516 23:04:48.666150    7516 cli_runner.go:164] Run: docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}
	W0516 23:04:49.785818    7516 cli_runner.go:211] docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:04:49.785818    7516 cli_runner.go:217] Completed: docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}: (1.1196586s)
	I0516 23:04:49.785818    7516 delete.go:82] Unable to get host status for custom-weave-20220516225309-2444, assuming it has already been deleted: state: unknown state "custom-weave-20220516225309-2444": docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	W0516 23:04:49.785818    7516 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for custom-weave-20220516225309-2444 container: docker volume create custom-weave-20220516225309-2444 --label name.minikube.sigs.k8s.io=custom-weave-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create custom-weave-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/custom-weave-20220516225309-2444': mkdir /var/lib/docker/volumes/custom-weave-20220516225309-2444: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for custom-weave-20220516225309-2444 container: docker volume create custom-weave-20220516225309-2444 --label name.minikube.sigs.k8s.io=custom-weave-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create custom-weave-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/custom-weave-20220516225309-2444': mkdir /var/lib/docker/volumes/custom-weave-20220516225309-2444: read-only file system
	
	I0516 23:04:49.785818    7516 start.go:623] Will try again in 5 seconds ...
	I0516 23:04:54.793097    7516 start.go:352] acquiring machines lock for custom-weave-20220516225309-2444: {Name:mk686c9d2a98b2affc3ee1777cdf0baf43f1a69f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 23:04:54.793382    7516 start.go:356] acquired machines lock for "custom-weave-20220516225309-2444" in 116.7µs
	I0516 23:04:54.793382    7516 start.go:94] Skipping create...Using existing machine configuration
	I0516 23:04:54.793382    7516 fix.go:55] fixHost starting: 
	I0516 23:04:54.811298    7516 cli_runner.go:164] Run: docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}
	W0516 23:04:55.922352    7516 cli_runner.go:211] docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:04:55.922406    7516 cli_runner.go:217] Completed: docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}: (1.1098893s)
	I0516 23:04:55.922406    7516 fix.go:103] recreateIfNeeded on custom-weave-20220516225309-2444: state= err=unknown state "custom-weave-20220516225309-2444": docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	I0516 23:04:55.922406    7516 fix.go:108] machineExists: false. err=machine does not exist
	I0516 23:04:55.925510    7516 out.go:177] * docker "custom-weave-20220516225309-2444" container is missing, will recreate.
	I0516 23:04:55.928693    7516 delete.go:124] DEMOLISHING custom-weave-20220516225309-2444 ...
	I0516 23:04:55.943859    7516 cli_runner.go:164] Run: docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}
	W0516 23:04:57.063537    7516 cli_runner.go:211] docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:04:57.063537    7516 cli_runner.go:217] Completed: docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}: (1.1196682s)
	W0516 23:04:57.063537    7516 stop.go:75] unable to get state: unknown state "custom-weave-20220516225309-2444": docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	I0516 23:04:57.063537    7516 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "custom-weave-20220516225309-2444": docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	I0516 23:04:57.086984    7516 cli_runner.go:164] Run: docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}
	W0516 23:04:58.171549    7516 cli_runner.go:211] docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:04:58.171549    7516 cli_runner.go:217] Completed: docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}: (1.0845143s)
	I0516 23:04:58.171549    7516 delete.go:82] Unable to get host status for custom-weave-20220516225309-2444, assuming it has already been deleted: state: unknown state "custom-weave-20220516225309-2444": docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	I0516 23:04:58.181652    7516 cli_runner.go:164] Run: docker container inspect -f {{.Id}} custom-weave-20220516225309-2444
	W0516 23:04:59.309455    7516 cli_runner.go:211] docker container inspect -f {{.Id}} custom-weave-20220516225309-2444 returned with exit code 1
	I0516 23:04:59.309455    7516 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} custom-weave-20220516225309-2444: (1.1277931s)
	I0516 23:04:59.309455    7516 kic.go:356] could not find the container custom-weave-20220516225309-2444 to remove it. will try anyways
	I0516 23:04:59.318441    7516 cli_runner.go:164] Run: docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}
	W0516 23:05:00.418243    7516 cli_runner.go:211] docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:05:00.418243    7516 cli_runner.go:217] Completed: docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}: (1.0997921s)
	W0516 23:05:00.418243    7516 oci.go:84] error getting container status, will try to delete anyways: unknown state "custom-weave-20220516225309-2444": docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	I0516 23:05:00.426240    7516 cli_runner.go:164] Run: docker exec --privileged -t custom-weave-20220516225309-2444 /bin/bash -c "sudo init 0"
	W0516 23:05:01.510025    7516 cli_runner.go:211] docker exec --privileged -t custom-weave-20220516225309-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 23:05:01.510025    7516 cli_runner.go:217] Completed: docker exec --privileged -t custom-weave-20220516225309-2444 /bin/bash -c "sudo init 0": (1.0837756s)
	I0516 23:05:01.510025    7516 oci.go:641] error shutdown custom-weave-20220516225309-2444: docker exec --privileged -t custom-weave-20220516225309-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	I0516 23:05:02.519506    7516 cli_runner.go:164] Run: docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}
	W0516 23:05:03.714197    7516 cli_runner.go:211] docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:05:03.714197    7516 cli_runner.go:217] Completed: docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}: (1.1946807s)
	I0516 23:05:03.714438    7516 oci.go:653] temporary error verifying shutdown: unknown state "custom-weave-20220516225309-2444": docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	I0516 23:05:03.714494    7516 oci.go:655] temporary error: container custom-weave-20220516225309-2444 status is  but expect it to be exited
	I0516 23:05:03.714580    7516 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "custom-weave-20220516225309-2444": docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	I0516 23:05:04.209309    7516 cli_runner.go:164] Run: docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}
	W0516 23:05:05.323662    7516 cli_runner.go:211] docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:05:05.323879    7516 cli_runner.go:217] Completed: docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}: (1.1142111s)
	I0516 23:05:05.323943    7516 oci.go:653] temporary error verifying shutdown: unknown state "custom-weave-20220516225309-2444": docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	I0516 23:05:05.324000    7516 oci.go:655] temporary error: container custom-weave-20220516225309-2444 status is  but expect it to be exited
	I0516 23:05:05.324062    7516 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "custom-weave-20220516225309-2444": docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	I0516 23:05:06.229406    7516 cli_runner.go:164] Run: docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}
	W0516 23:05:07.332173    7516 cli_runner.go:211] docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:05:07.332173    7516 cli_runner.go:217] Completed: docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}: (1.1027574s)
	I0516 23:05:07.332173    7516 oci.go:653] temporary error verifying shutdown: unknown state "custom-weave-20220516225309-2444": docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	I0516 23:05:07.332173    7516 oci.go:655] temporary error: container custom-weave-20220516225309-2444 status is  but expect it to be exited
	I0516 23:05:07.332173    7516 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "custom-weave-20220516225309-2444": docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	I0516 23:05:07.982603    7516 cli_runner.go:164] Run: docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}
	W0516 23:05:09.092925    7516 cli_runner.go:211] docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:05:09.093158    7516 cli_runner.go:217] Completed: docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}: (1.1103118s)
	I0516 23:05:09.093210    7516 oci.go:653] temporary error verifying shutdown: unknown state "custom-weave-20220516225309-2444": docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	I0516 23:05:09.093262    7516 oci.go:655] temporary error: container custom-weave-20220516225309-2444 status is  but expect it to be exited
	I0516 23:05:09.093262    7516 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "custom-weave-20220516225309-2444": docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	I0516 23:05:10.215059    7516 cli_runner.go:164] Run: docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}
	W0516 23:05:11.281830    7516 cli_runner.go:211] docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:05:11.281830    7516 cli_runner.go:217] Completed: docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}: (1.0667611s)
	I0516 23:05:11.281830    7516 oci.go:653] temporary error verifying shutdown: unknown state "custom-weave-20220516225309-2444": docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	I0516 23:05:11.281830    7516 oci.go:655] temporary error: container custom-weave-20220516225309-2444 status is  but expect it to be exited
	I0516 23:05:11.281830    7516 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "custom-weave-20220516225309-2444": docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	I0516 23:05:12.813944    7516 cli_runner.go:164] Run: docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}
	W0516 23:05:13.906040    7516 cli_runner.go:211] docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:05:13.906220    7516 cli_runner.go:217] Completed: docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}: (1.0920036s)
	I0516 23:05:13.906313    7516 oci.go:653] temporary error verifying shutdown: unknown state "custom-weave-20220516225309-2444": docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	I0516 23:05:13.906357    7516 oci.go:655] temporary error: container custom-weave-20220516225309-2444 status is  but expect it to be exited
	I0516 23:05:13.906414    7516 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "custom-weave-20220516225309-2444": docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	I0516 23:05:16.963278    7516 cli_runner.go:164] Run: docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}
	W0516 23:05:18.071527    7516 cli_runner.go:211] docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:05:18.071527    7516 cli_runner.go:217] Completed: docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}: (1.1082393s)
	I0516 23:05:18.071527    7516 oci.go:653] temporary error verifying shutdown: unknown state "custom-weave-20220516225309-2444": docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	I0516 23:05:18.071527    7516 oci.go:655] temporary error: container custom-weave-20220516225309-2444 status is  but expect it to be exited
	I0516 23:05:18.071527    7516 oci.go:88] couldn't shut down custom-weave-20220516225309-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "custom-weave-20220516225309-2444": docker container inspect custom-weave-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	 
	I0516 23:05:18.079532    7516 cli_runner.go:164] Run: docker rm -f -v custom-weave-20220516225309-2444
	I0516 23:05:19.221946    7516 cli_runner.go:217] Completed: docker rm -f -v custom-weave-20220516225309-2444: (1.1423461s)
	I0516 23:05:19.229966    7516 cli_runner.go:164] Run: docker container inspect -f {{.Id}} custom-weave-20220516225309-2444
	W0516 23:05:20.318350    7516 cli_runner.go:211] docker container inspect -f {{.Id}} custom-weave-20220516225309-2444 returned with exit code 1
	I0516 23:05:20.318350    7516 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} custom-weave-20220516225309-2444: (1.0883742s)
	I0516 23:05:20.326336    7516 cli_runner.go:164] Run: docker network inspect custom-weave-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:05:21.464729    7516 cli_runner.go:211] docker network inspect custom-weave-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:05:21.464809    7516 cli_runner.go:217] Completed: docker network inspect custom-weave-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1381457s)
	I0516 23:05:21.473003    7516 network_create.go:272] running [docker network inspect custom-weave-20220516225309-2444] to gather additional debugging logs...
	I0516 23:05:21.473003    7516 cli_runner.go:164] Run: docker network inspect custom-weave-20220516225309-2444
	W0516 23:05:22.637990    7516 cli_runner.go:211] docker network inspect custom-weave-20220516225309-2444 returned with exit code 1
	I0516 23:05:22.637990    7516 cli_runner.go:217] Completed: docker network inspect custom-weave-20220516225309-2444: (1.1648826s)
	I0516 23:05:22.637990    7516 network_create.go:275] error running [docker network inspect custom-weave-20220516225309-2444]: docker network inspect custom-weave-20220516225309-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: custom-weave-20220516225309-2444
	I0516 23:05:22.637990    7516 network_create.go:277] output of [docker network inspect custom-weave-20220516225309-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: custom-weave-20220516225309-2444
	
	** /stderr **
	W0516 23:05:22.639209    7516 delete.go:139] delete failed (probably ok) <nil>
	I0516 23:05:22.639209    7516 fix.go:115] Sleeping 1 second for extra luck!
	I0516 23:05:23.649393    7516 start.go:131] createHost starting for "" (driver="docker")
	I0516 23:05:23.657729    7516 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0516 23:05:23.657729    7516 start.go:165] libmachine.API.Create for "custom-weave-20220516225309-2444" (driver="docker")
	I0516 23:05:23.657729    7516 client.go:168] LocalClient.Create starting
	I0516 23:05:23.658716    7516 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 23:05:23.658716    7516 main.go:134] libmachine: Decoding PEM data...
	I0516 23:05:23.658716    7516 main.go:134] libmachine: Parsing certificate...
	I0516 23:05:23.658716    7516 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 23:05:23.659385    7516 main.go:134] libmachine: Decoding PEM data...
	I0516 23:05:23.659385    7516 main.go:134] libmachine: Parsing certificate...
	I0516 23:05:23.670748    7516 cli_runner.go:164] Run: docker network inspect custom-weave-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:05:24.753498    7516 cli_runner.go:211] docker network inspect custom-weave-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:05:24.753498    7516 cli_runner.go:217] Completed: docker network inspect custom-weave-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0827407s)
	I0516 23:05:24.762511    7516 network_create.go:272] running [docker network inspect custom-weave-20220516225309-2444] to gather additional debugging logs...
	I0516 23:05:24.762511    7516 cli_runner.go:164] Run: docker network inspect custom-weave-20220516225309-2444
	W0516 23:05:25.864877    7516 cli_runner.go:211] docker network inspect custom-weave-20220516225309-2444 returned with exit code 1
	I0516 23:05:25.864877    7516 cli_runner.go:217] Completed: docker network inspect custom-weave-20220516225309-2444: (1.1023568s)
	I0516 23:05:25.864877    7516 network_create.go:275] error running [docker network inspect custom-weave-20220516225309-2444]: docker network inspect custom-weave-20220516225309-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: custom-weave-20220516225309-2444
	I0516 23:05:25.864877    7516 network_create.go:277] output of [docker network inspect custom-weave-20220516225309-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: custom-weave-20220516225309-2444
	
	** /stderr **
	I0516 23:05:25.874279    7516 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 23:05:27.023044    7516 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1487554s)
	I0516 23:05:27.039765    7516 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00010c6e8] amended:true}} dirty:map[192.168.49.0:0xc00010c6e8 192.168.58.0:0xc00010c7b0 192.168.67.0:0xc000a8e2f0 192.168.76.0:0xc00010c848] misses:2}
	I0516 23:05:27.039847    7516 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:05:27.054690    7516 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00010c6e8] amended:true}} dirty:map[192.168.49.0:0xc00010c6e8 192.168.58.0:0xc00010c7b0 192.168.67.0:0xc000a8e2f0 192.168.76.0:0xc00010c848] misses:3}
	I0516 23:05:27.054690    7516 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:05:27.069744    7516 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00010c6e8 192.168.58.0:0xc00010c7b0 192.168.67.0:0xc000a8e2f0 192.168.76.0:0xc00010c848] amended:false}} dirty:map[] misses:0}
	I0516 23:05:27.069744    7516 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:05:27.086478    7516 network.go:279] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00010c6e8 192.168.58.0:0xc00010c7b0 192.168.67.0:0xc000a8e2f0 192.168.76.0:0xc00010c848] amended:false}} dirty:map[] misses:0}
	I0516 23:05:27.086478    7516 network.go:238] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:05:27.103704    7516 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00010c6e8 192.168.58.0:0xc00010c7b0 192.168.67.0:0xc000a8e2f0 192.168.76.0:0xc00010c848] amended:true}} dirty:map[192.168.49.0:0xc00010c6e8 192.168.58.0:0xc00010c7b0 192.168.67.0:0xc000a8e2f0 192.168.76.0:0xc00010c848 192.168.85.0:0xc000398198] misses:0}
	I0516 23:05:27.103743    7516 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:05:27.103743    7516 network_create.go:115] attempt to create docker network custom-weave-20220516225309-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0516 23:05:27.110365    7516 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220516225309-2444
	W0516 23:05:28.230356    7516 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220516225309-2444 returned with exit code 1
	I0516 23:05:28.230449    7516 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220516225309-2444: (1.1198123s)
	E0516 23:05:28.230497    7516 network_create.go:104] error while trying to create docker network custom-weave-20220516225309-2444 192.168.85.0/24: create docker network custom-weave-20220516225309-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220516225309-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 6467407e6f62cd6d87b68c40b4d08e787b3364b5687baf427196357b41551db6 (br-6467407e6f62): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	W0516 23:05:28.230497    7516 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network custom-weave-20220516225309-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220516225309-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 6467407e6f62cd6d87b68c40b4d08e787b3364b5687baf427196357b41551db6 (br-6467407e6f62): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network custom-weave-20220516225309-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220516225309-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 6467407e6f62cd6d87b68c40b4d08e787b3364b5687baf427196357b41551db6 (br-6467407e6f62): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	I0516 23:05:28.249237    7516 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 23:05:29.356756    7516 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1075096s)
	I0516 23:05:29.365252    7516 cli_runner.go:164] Run: docker volume create custom-weave-20220516225309-2444 --label name.minikube.sigs.k8s.io=custom-weave-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 23:05:30.457573    7516 cli_runner.go:211] docker volume create custom-weave-20220516225309-2444 --label name.minikube.sigs.k8s.io=custom-weave-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 23:05:30.457621    7516 cli_runner.go:217] Completed: docker volume create custom-weave-20220516225309-2444 --label name.minikube.sigs.k8s.io=custom-weave-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0921283s)
	I0516 23:05:30.457671    7516 client.go:171] LocalClient.Create took 6.7998819s
	I0516 23:05:32.470326    7516 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:05:32.478272    7516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444
	W0516 23:05:33.606152    7516 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444 returned with exit code 1
	I0516 23:05:33.606202    7516 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444: (1.1275579s)
	I0516 23:05:33.606202    7516 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	I0516 23:05:33.946064    7516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444
	W0516 23:05:35.034446    7516 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444 returned with exit code 1
	I0516 23:05:35.034723    7516 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444: (1.088373s)
	W0516 23:05:35.034891    7516 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	
	W0516 23:05:35.034964    7516 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	I0516 23:05:35.049670    7516 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:05:35.057484    7516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444
	W0516 23:05:36.151609    7516 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444 returned with exit code 1
	I0516 23:05:36.151609    7516 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444: (1.0940788s)
	I0516 23:05:36.152010    7516 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	I0516 23:05:36.383062    7516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444
	W0516 23:05:37.494610    7516 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444 returned with exit code 1
	I0516 23:05:37.494610    7516 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444: (1.1115384s)
	W0516 23:05:37.494610    7516 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	
	W0516 23:05:37.494610    7516 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	I0516 23:05:37.494610    7516 start.go:134] duration metric: createHost completed in 13.8450949s
	I0516 23:05:37.508617    7516 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:05:37.518611    7516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444
	W0516 23:05:38.656201    7516 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444 returned with exit code 1
	I0516 23:05:38.656346    7516 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444: (1.1374073s)
	I0516 23:05:38.656492    7516 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	I0516 23:05:38.913140    7516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444
	W0516 23:05:40.012929    7516 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444 returned with exit code 1
	I0516 23:05:40.012929    7516 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444: (1.0997793s)
	W0516 23:05:40.012929    7516 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	
	W0516 23:05:40.012929    7516 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	I0516 23:05:40.023658    7516 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:05:40.037246    7516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444
	W0516 23:05:41.163895    7516 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444 returned with exit code 1
	I0516 23:05:41.163982    7516 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444: (1.1266396s)
	I0516 23:05:41.164237    7516 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	I0516 23:05:41.380680    7516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444
	W0516 23:05:42.545737    7516 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444 returned with exit code 1
	I0516 23:05:42.545737    7516 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444: (1.1650469s)
	W0516 23:05:42.545737    7516 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	
	W0516 23:05:42.545737    7516 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20220516225309-2444
	I0516 23:05:42.545737    7516 fix.go:57] fixHost completed within 47.7519355s
	I0516 23:05:42.546396    7516 start.go:81] releasing machines lock for "custom-weave-20220516225309-2444", held for 47.7525936s
	W0516 23:05:42.546556    7516 out.go:239] * Failed to start docker container. Running "minikube delete -p custom-weave-20220516225309-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for custom-weave-20220516225309-2444 container: docker volume create custom-weave-20220516225309-2444 --label name.minikube.sigs.k8s.io=custom-weave-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create custom-weave-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/custom-weave-20220516225309-2444': mkdir /var/lib/docker/volumes/custom-weave-20220516225309-2444: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p custom-weave-20220516225309-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for custom-weave-20220516225309-2444 container: docker volume create custom-weave-20220516225309-2444 --label name.minikube.sigs.k8s.io=custom-weave-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create custom-weave-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/custom-weave-20220516225309-2444': mkdir /var/lib/docker/volumes/custom-weave-20220516225309-2444: read-only file system
	
	I0516 23:05:42.553616    7516 out.go:177] 
	W0516 23:05:42.557385    7516 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for custom-weave-20220516225309-2444 container: docker volume create custom-weave-20220516225309-2444 --label name.minikube.sigs.k8s.io=custom-weave-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create custom-weave-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/custom-weave-20220516225309-2444': mkdir /var/lib/docker/volumes/custom-weave-20220516225309-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for custom-weave-20220516225309-2444 container: docker volume create custom-weave-20220516225309-2444 --label name.minikube.sigs.k8s.io=custom-weave-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create custom-weave-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/custom-weave-20220516225309-2444': mkdir /var/lib/docker/volumes/custom-weave-20220516225309-2444: read-only file system
	
	W0516 23:05:42.557385    7516 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0516 23:05:42.557935    7516 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0516 23:05:42.560432    7516 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 60
--- FAIL: TestNetworkPlugins/group/custom-weave/Start (81.97s)
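Note on the `networks have overlapping IPv4` error in the log above: the Docker daemon refuses to create a bridge network whose subnet intersects an existing bridge (here `br-ea4bbeff936d`), even though minikube had just verified 192.168.85.0/24 as a "free private subnet" against its own reservation map. A minimal sketch of the overlap check itself, using Python's stdlib `ipaddress` (the second and third subnets below are illustrative, not taken from this run):

```python
import ipaddress

def overlaps(cidr_a: str, cidr_b: str) -> bool:
    """True if the two IPv4 CIDR blocks share any addresses."""
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

# The subnet minikube attempted (192.168.85.0/24) vs. hypothetical leftovers:
print(overlaps("192.168.85.0/24", "192.168.86.0/24"))  # False: adjacent /24s are disjoint
print(overlaps("192.168.85.0/24", "192.168.84.0/23"))  # True: the /23 spans .84.x and .85.x
```

This is why minikube's own bookkeeping (which only tracks subnets it reserved) can disagree with the daemon: a stale or wider bridge network created outside minikube still collides at the daemon level.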

TestNetworkPlugins/group/enable-default-cni/Start (82.36s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-20220516225301-2444 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p enable-default-cni-20220516225301-2444 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker: exit status 60 (1m22.2863372s)

-- stdout --
	* [enable-default-cni-20220516225301-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node enable-default-cni-20220516225301-2444 in cluster enable-default-cni-20220516225301-2444
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "enable-default-cni-20220516225301-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0516 23:04:40.601291     200 out.go:296] Setting OutFile to fd 1392 ...
	I0516 23:04:40.666809     200 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:04:40.666809     200 out.go:309] Setting ErrFile to fd 1520...
	I0516 23:04:40.666809     200 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:04:40.679441     200 out.go:303] Setting JSON to false
	I0516 23:04:40.682245     200 start.go:115] hostinfo: {"hostname":"minikube2","uptime":5393,"bootTime":1652736887,"procs":157,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0516 23:04:40.683262     200 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0516 23:04:40.687341     200 out.go:177] * [enable-default-cni-20220516225301-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0516 23:04:40.691705     200 notify.go:193] Checking for updates...
	I0516 23:04:40.693245     200 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0516 23:04:40.696241     200 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0516 23:04:40.698474     200 out.go:177]   - MINIKUBE_LOCATION=12739
	I0516 23:04:40.701052     200 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0516 23:04:40.704783     200 config.go:178] Loaded profile config "custom-weave-20220516225309-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:04:40.705072     200 config.go:178] Loaded profile config "default-k8s-different-port-20220516230045-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:04:40.705645     200 config.go:178] Loaded profile config "multinode-20220516223121-2444-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:04:40.705909     200 config.go:178] Loaded profile config "newest-cni-20220516230100-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:04:40.705909     200 driver.go:358] Setting default libvirt URI to qemu:///system
	I0516 23:04:43.429428     200 docker.go:137] docker version: linux-20.10.14
	I0516 23:04:43.438900     200 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 23:04:45.539426     200 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1002415s)
	I0516 23:04:45.540346     200 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 23:04:44.5021344 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0516 23:04:45.544406     200 out.go:177] * Using the docker driver based on user configuration
	I0516 23:04:45.546574     200 start.go:284] selected driver: docker
	I0516 23:04:45.546574     200 start.go:806] validating driver "docker" against <nil>
	I0516 23:04:45.546651     200 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0516 23:04:45.613217     200 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 23:04:47.774952     200 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1616811s)
	I0516 23:04:47.774952     200 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 23:04:46.7045483 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0516 23:04:47.774952     200 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	E0516 23:04:47.775941     200 start_flags.go:444] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0516 23:04:47.775941     200 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0516 23:04:47.778974     200 out.go:177] * Using Docker Desktop driver with the root privilege
	I0516 23:04:47.782945     200 cni.go:95] Creating CNI manager for "bridge"
	I0516 23:04:47.782945     200 start_flags.go:301] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0516 23:04:47.782945     200 start_flags.go:306] config:
	{Name:enable-default-cni-20220516225301-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:enable-default-cni-20220516225301-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 23:04:47.785944     200 out.go:177] * Starting control plane node enable-default-cni-20220516225301-2444 in cluster enable-default-cni-20220516225301-2444
	I0516 23:04:47.789944     200 cache.go:120] Beginning downloading kic base image for docker with docker
	I0516 23:04:47.796945     200 out.go:177] * Pulling base image ...
	I0516 23:04:47.798941     200 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0516 23:04:47.798941     200 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0516 23:04:47.798941     200 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0516 23:04:47.798941     200 cache.go:57] Caching tarball of preloaded images
	I0516 23:04:47.799941     200 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0516 23:04:47.799941     200 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0516 23:04:47.799941     200 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\enable-default-cni-20220516225301-2444\config.json ...
	I0516 23:04:47.799941     200 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\enable-default-cni-20220516225301-2444\config.json: {Name:mk9a0b72157548b32f30abcb5a4044f405ca7763 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0516 23:04:48.916782     200 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0516 23:04:48.916782     200 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:04:48.916782     200 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:04:48.916782     200 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
	I0516 23:04:48.916782     200 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory, skipping pull
	I0516 23:04:48.916782     200 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in cache, skipping pull
	I0516 23:04:48.916782     200 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
	I0516 23:04:48.916782     200 cache.go:160] Loading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from local cache
	I0516 23:04:48.916782     200 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:04:51.296333     200 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from cached tarball
	I0516 23:04:51.296443     200 cache.go:206] Successfully downloaded all kic artifacts
	I0516 23:04:51.296558     200 start.go:352] acquiring machines lock for enable-default-cni-20220516225301-2444: {Name:mk7269b42ae508cdb359b0bf2e86f75155a5a745 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 23:04:51.296703     200 start.go:356] acquired machines lock for "enable-default-cni-20220516225301-2444" in 43.2µs
	I0516 23:04:51.298001     200 start.go:91] Provisioning new machine with config: &{Name:enable-default-cni-20220516225301-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:enable-default-cni-20220516225301-2444 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0516 23:04:51.298159     200 start.go:131] createHost starting for "" (driver="docker")
	I0516 23:04:51.302779     200 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0516 23:04:51.303331     200 start.go:165] libmachine.API.Create for "enable-default-cni-20220516225301-2444" (driver="docker")
	I0516 23:04:51.303399     200 client.go:168] LocalClient.Create starting
	I0516 23:04:51.303539     200 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 23:04:51.304153     200 main.go:134] libmachine: Decoding PEM data...
	I0516 23:04:51.304193     200 main.go:134] libmachine: Parsing certificate...
	I0516 23:04:51.304443     200 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 23:04:51.304668     200 main.go:134] libmachine: Decoding PEM data...
	I0516 23:04:51.304704     200 main.go:134] libmachine: Parsing certificate...
	I0516 23:04:51.315485     200 cli_runner.go:164] Run: docker network inspect enable-default-cni-20220516225301-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:04:52.419253     200 cli_runner.go:211] docker network inspect enable-default-cni-20220516225301-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:04:52.419253     200 cli_runner.go:217] Completed: docker network inspect enable-default-cni-20220516225301-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1037591s)
	I0516 23:04:52.427884     200 network_create.go:272] running [docker network inspect enable-default-cni-20220516225301-2444] to gather additional debugging logs...
	I0516 23:04:52.427884     200 cli_runner.go:164] Run: docker network inspect enable-default-cni-20220516225301-2444
	W0516 23:04:53.496802     200 cli_runner.go:211] docker network inspect enable-default-cni-20220516225301-2444 returned with exit code 1
	I0516 23:04:53.496919     200 cli_runner.go:217] Completed: docker network inspect enable-default-cni-20220516225301-2444: (1.068876s)
	I0516 23:04:53.496970     200 network_create.go:275] error running [docker network inspect enable-default-cni-20220516225301-2444]: docker network inspect enable-default-cni-20220516225301-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: enable-default-cni-20220516225301-2444
	I0516 23:04:53.496970     200 network_create.go:277] output of [docker network inspect enable-default-cni-20220516225301-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: enable-default-cni-20220516225301-2444
	
	** /stderr **
	I0516 23:04:53.506713     200 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 23:04:54.556428     200 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0494731s)
	I0516 23:04:54.575942     200 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0005a4480] misses:0}
	I0516 23:04:54.575942     200 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:04:54.575942     200 network_create.go:115] attempt to create docker network enable-default-cni-20220516225301-2444 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0516 23:04:54.585127     200 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220516225301-2444
	W0516 23:04:55.671512     200 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220516225301-2444 returned with exit code 1
	I0516 23:04:55.674555     200 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220516225301-2444: (1.0863485s)
	W0516 23:04:55.674555     200 network_create.go:107] failed to create docker network enable-default-cni-20220516225301-2444 192.168.49.0/24, will retry: subnet is taken
	I0516 23:04:55.692980     200 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005a4480] amended:false}} dirty:map[] misses:0}
	I0516 23:04:55.692980     200 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:04:55.711200     200 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005a4480] amended:true}} dirty:map[192.168.49.0:0xc0005a4480 192.168.58.0:0xc0005a4558] misses:0}
	I0516 23:04:55.712090     200 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:04:55.712090     200 network_create.go:115] attempt to create docker network enable-default-cni-20220516225301-2444 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0516 23:04:55.720539     200 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220516225301-2444
	W0516 23:04:56.859876     200 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220516225301-2444 returned with exit code 1
	I0516 23:04:56.859876     200 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220516225301-2444: (1.1393267s)
	W0516 23:04:56.859876     200 network_create.go:107] failed to create docker network enable-default-cni-20220516225301-2444 192.168.58.0/24, will retry: subnet is taken
	I0516 23:04:56.878241     200 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005a4480] amended:true}} dirty:map[192.168.49.0:0xc0005a4480 192.168.58.0:0xc0005a4558] misses:1}
	I0516 23:04:56.878241     200 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:04:56.897211     200 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005a4480] amended:true}} dirty:map[192.168.49.0:0xc0005a4480 192.168.58.0:0xc0005a4558 192.168.67.0:0xc0005a4800] misses:1}
	I0516 23:04:56.897211     200 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:04:56.897211     200 network_create.go:115] attempt to create docker network enable-default-cni-20220516225301-2444 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0516 23:04:56.904201     200 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220516225301-2444
	W0516 23:04:58.029200     200 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220516225301-2444 returned with exit code 1
	I0516 23:04:58.029200     200 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220516225301-2444: (1.1249897s)
	W0516 23:04:58.029315     200 network_create.go:107] failed to create docker network enable-default-cni-20220516225301-2444 192.168.67.0/24, will retry: subnet is taken
	I0516 23:04:58.050512     200 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005a4480] amended:true}} dirty:map[192.168.49.0:0xc0005a4480 192.168.58.0:0xc0005a4558 192.168.67.0:0xc0005a4800] misses:2}
	I0516 23:04:58.050744     200 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:04:58.074875     200 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005a4480] amended:true}} dirty:map[192.168.49.0:0xc0005a4480 192.168.58.0:0xc0005a4558 192.168.67.0:0xc0005a4800 192.168.76.0:0xc000006540] misses:2}
	I0516 23:04:58.074875     200 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:04:58.074875     200 network_create.go:115] attempt to create docker network enable-default-cni-20220516225301-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0516 23:04:58.083501     200 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220516225301-2444
	W0516 23:04:59.209547     200 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220516225301-2444 returned with exit code 1
	I0516 23:04:59.209547     200 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220516225301-2444: (1.1260363s)
	E0516 23:04:59.209547     200 network_create.go:104] error while trying to create docker network enable-default-cni-20220516225301-2444 192.168.76.0/24: create docker network enable-default-cni-20220516225301-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220516225301-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f0a54e25f483ff8fe89584a6d24446cfaef6074fb3a012f65309f8e9298cd510 (br-f0a54e25f483): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	W0516 23:04:59.209547     200 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network enable-default-cni-20220516225301-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220516225301-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f0a54e25f483ff8fe89584a6d24446cfaef6074fb3a012f65309f8e9298cd510 (br-f0a54e25f483): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network enable-default-cni-20220516225301-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220516225301-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f0a54e25f483ff8fe89584a6d24446cfaef6074fb3a012f65309f8e9298cd510 (br-f0a54e25f483): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
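The "networks have overlapping IPv4" failure above means the 192.168.76.0/24 subnet minikube requested collides with a subnet already claimed by another Docker bridge network on the host. The overlap test Docker applies is plain CIDR intersection; a minimal sketch using Python's stdlib `ipaddress` (the `existing` subnet below is hypothetical, chosen only to demonstrate a collision):

```python
import ipaddress

def subnets_overlap(a: str, b: str) -> bool:
    """Return True if two CIDR blocks share at least one address."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

# The subnet minikube asked docker to create:
requested = "192.168.76.0/24"
# A hypothetical subnet held by a leftover bridge network (br-301630a99a7e):
existing = "192.168.76.0/23"

print(subnets_overlap(requested, existing))        # collision -> docker refuses
print(subnets_overlap(requested, "192.168.80.0/24"))  # disjoint -> would succeed
```

In practice the fix is to remove the stale network (`docker network rm`) or let minikube pick a different /24.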
	I0516 23:04:59.223538     200 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 23:05:00.323951     200 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0994035s)
	I0516 23:05:00.331952     200 cli_runner.go:164] Run: docker volume create enable-default-cni-20220516225301-2444 --label name.minikube.sigs.k8s.io=enable-default-cni-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 23:05:01.400274     200 cli_runner.go:211] docker volume create enable-default-cni-20220516225301-2444 --label name.minikube.sigs.k8s.io=enable-default-cni-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 23:05:01.400337     200 cli_runner.go:217] Completed: docker volume create enable-default-cni-20220516225301-2444 --label name.minikube.sigs.k8s.io=enable-default-cni-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0681417s)
	I0516 23:05:01.400337     200 client.go:171] LocalClient.Create took 10.096851s
	I0516 23:05:03.415090     200 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:05:03.422089     200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444
	W0516 23:05:04.565247     200 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444 returned with exit code 1
	I0516 23:05:04.565247     200 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444: (1.143148s)
	I0516 23:05:04.565247     200 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	I0516 23:05:04.854500     200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444
	W0516 23:05:05.954674     200 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444 returned with exit code 1
	I0516 23:05:05.954674     200 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444: (1.1001645s)
	W0516 23:05:05.954674     200 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	
	W0516 23:05:05.954674     200 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	I0516 23:05:05.963668     200 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:05:05.972952     200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444
	W0516 23:05:07.084833     200 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444 returned with exit code 1
	I0516 23:05:07.084902     200 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444: (1.1117555s)
	I0516 23:05:07.085081     200 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	I0516 23:05:07.390238     200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444
	W0516 23:05:08.514533     200 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444 returned with exit code 1
	I0516 23:05:08.514570     200 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444: (1.1236365s)
	W0516 23:05:08.514847     200 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	
	W0516 23:05:08.514953     200 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	I0516 23:05:08.514953     200 start.go:134] duration metric: createHost completed in 17.2166433s
	I0516 23:05:08.515005     200 start.go:81] releasing machines lock for "enable-default-cni-20220516225301-2444", held for 17.2181509s
	W0516 23:05:08.515168     200 start.go:608] error starting host: creating host: create: creating: setting up container node: creating volume for enable-default-cni-20220516225301-2444 container: docker volume create enable-default-cni-20220516225301-2444 --label name.minikube.sigs.k8s.io=enable-default-cni-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create enable-default-cni-20220516225301-2444: error while creating volume root path '/var/lib/docker/volumes/enable-default-cni-20220516225301-2444': mkdir /var/lib/docker/volumes/enable-default-cni-20220516225301-2444: read-only file system
	I0516 23:05:08.535182     200 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}
	W0516 23:05:09.683020     200 cli_runner.go:211] docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:05:09.683067     200 cli_runner.go:217] Completed: docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}: (1.1476791s)
	I0516 23:05:09.683163     200 delete.go:82] Unable to get host status for enable-default-cni-20220516225301-2444, assuming it has already been deleted: state: unknown state "enable-default-cni-20220516225301-2444": docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	W0516 23:05:09.683484     200 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for enable-default-cni-20220516225301-2444 container: docker volume create enable-default-cni-20220516225301-2444 --label name.minikube.sigs.k8s.io=enable-default-cni-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create enable-default-cni-20220516225301-2444: error while creating volume root path '/var/lib/docker/volumes/enable-default-cni-20220516225301-2444': mkdir /var/lib/docker/volumes/enable-default-cni-20220516225301-2444: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for enable-default-cni-20220516225301-2444 container: docker volume create enable-default-cni-20220516225301-2444 --label name.minikube.sigs.k8s.io=enable-default-cni-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create enable-default-cni-20220516225301-2444: error while creating volume root path '/var/lib/docker/volumes/enable-default-cni-20220516225301-2444': mkdir /var/lib/docker/volumes/enable-default-cni-20220516225301-2444: read-only file system
	
	I0516 23:05:09.683558     200 start.go:623] Will try again in 5 seconds ...
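The `retry.go:31` lines above re-run the failing `docker container inspect` with growing, slightly randomized delays (276ms, 291ms, 462ms, 890ms, 1.1s, 1.5s, 3s). That pattern is truncated exponential backoff with jitter; a sketch with hypothetical parameters, not minikube's actual constants:

```python
import random

def backoff_delays(base_ms: float = 250, factor: float = 1.6,
                   cap_ms: float = 5000, attempts: int = 6):
    """Yield jittered, exponentially growing retry delays, capped at cap_ms."""
    delay = base_ms
    for _ in range(attempts):
        # +/-20% jitter so concurrent retries don't synchronize.
        yield min(cap_ms, delay * random.uniform(0.8, 1.2))
        delay *= factor

for d in backoff_delays():
    print(f"will retry after {d:.0f}ms")
```

The jitter explains why the logged delays are not an exact geometric series; the cap keeps a persistently failing inspect from waiting unboundedly long between attempts.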
	I0516 23:05:14.694301     200 start.go:352] acquiring machines lock for enable-default-cni-20220516225301-2444: {Name:mk7269b42ae508cdb359b0bf2e86f75155a5a745 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 23:05:14.694543     200 start.go:356] acquired machines lock for "enable-default-cni-20220516225301-2444" in 120.7µs
	I0516 23:05:14.694700     200 start.go:94] Skipping create...Using existing machine configuration
	I0516 23:05:14.694731     200 fix.go:55] fixHost starting: 
	I0516 23:05:14.713201     200 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}
	W0516 23:05:15.788710     200 cli_runner.go:211] docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:05:15.788859     200 cli_runner.go:217] Completed: docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}: (1.0750765s)
	I0516 23:05:15.788997     200 fix.go:103] recreateIfNeeded on enable-default-cni-20220516225301-2444: state= err=unknown state "enable-default-cni-20220516225301-2444": docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	I0516 23:05:15.789100     200 fix.go:108] machineExists: false. err=machine does not exist
	I0516 23:05:15.792334     200 out.go:177] * docker "enable-default-cni-20220516225301-2444" container is missing, will recreate.
	I0516 23:05:15.794756     200 delete.go:124] DEMOLISHING enable-default-cni-20220516225301-2444 ...
	I0516 23:05:15.810705     200 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}
	W0516 23:05:16.923749     200 cli_runner.go:211] docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:05:16.923891     200 cli_runner.go:217] Completed: docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}: (1.1128229s)
	W0516 23:05:16.924004     200 stop.go:75] unable to get state: unknown state "enable-default-cni-20220516225301-2444": docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	I0516 23:05:16.924004     200 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "enable-default-cni-20220516225301-2444": docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	I0516 23:05:16.939940     200 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}
	W0516 23:05:18.055527     200 cli_runner.go:211] docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:05:18.055527     200 cli_runner.go:217] Completed: docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}: (1.1155768s)
	I0516 23:05:18.055527     200 delete.go:82] Unable to get host status for enable-default-cni-20220516225301-2444, assuming it has already been deleted: state: unknown state "enable-default-cni-20220516225301-2444": docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	I0516 23:05:18.062526     200 cli_runner.go:164] Run: docker container inspect -f {{.Id}} enable-default-cni-20220516225301-2444
	W0516 23:05:19.175802     200 cli_runner.go:211] docker container inspect -f {{.Id}} enable-default-cni-20220516225301-2444 returned with exit code 1
	I0516 23:05:19.175882     200 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} enable-default-cni-20220516225301-2444: (1.1129097s)
	I0516 23:05:19.175882     200 kic.go:356] could not find the container enable-default-cni-20220516225301-2444 to remove it. will try anyways
	I0516 23:05:19.188800     200 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}
	W0516 23:05:20.302389     200 cli_runner.go:211] docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:05:20.302389     200 cli_runner.go:217] Completed: docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}: (1.1135788s)
	W0516 23:05:20.302389     200 oci.go:84] error getting container status, will try to delete anyways: unknown state "enable-default-cni-20220516225301-2444": docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	I0516 23:05:20.310336     200 cli_runner.go:164] Run: docker exec --privileged -t enable-default-cni-20220516225301-2444 /bin/bash -c "sudo init 0"
	W0516 23:05:21.418945     200 cli_runner.go:211] docker exec --privileged -t enable-default-cni-20220516225301-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 23:05:21.418970     200 cli_runner.go:217] Completed: docker exec --privileged -t enable-default-cni-20220516225301-2444 /bin/bash -c "sudo init 0": (1.1084421s)
	I0516 23:05:21.419030     200 oci.go:641] error shutdown enable-default-cni-20220516225301-2444: docker exec --privileged -t enable-default-cni-20220516225301-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	I0516 23:05:22.443971     200 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}
	W0516 23:05:23.525272     200 cli_runner.go:211] docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:05:23.525272     200 cli_runner.go:217] Completed: docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}: (1.0812547s)
	I0516 23:05:23.525272     200 oci.go:653] temporary error verifying shutdown: unknown state "enable-default-cni-20220516225301-2444": docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	I0516 23:05:23.525272     200 oci.go:655] temporary error: container enable-default-cni-20220516225301-2444 status is  but expect it to be exited
	I0516 23:05:23.525272     200 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "enable-default-cni-20220516225301-2444": docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	I0516 23:05:24.005926     200 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}
	W0516 23:05:25.124609     200 cli_runner.go:211] docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:05:25.124609     200 cli_runner.go:217] Completed: docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}: (1.1186735s)
	I0516 23:05:25.124609     200 oci.go:653] temporary error verifying shutdown: unknown state "enable-default-cni-20220516225301-2444": docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	I0516 23:05:25.124609     200 oci.go:655] temporary error: container enable-default-cni-20220516225301-2444 status is  but expect it to be exited
	I0516 23:05:25.124609     200 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "enable-default-cni-20220516225301-2444": docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	I0516 23:05:26.031793     200 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}
	W0516 23:05:27.162145     200 cli_runner.go:211] docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:05:27.162145     200 cli_runner.go:217] Completed: docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}: (1.1302279s)
	I0516 23:05:27.162145     200 oci.go:653] temporary error verifying shutdown: unknown state "enable-default-cni-20220516225301-2444": docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	I0516 23:05:27.162145     200 oci.go:655] temporary error: container enable-default-cni-20220516225301-2444 status is  but expect it to be exited
	I0516 23:05:27.162145     200 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "enable-default-cni-20220516225301-2444": docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	I0516 23:05:27.819736     200 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}
	W0516 23:05:28.954447     200 cli_runner.go:211] docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:05:28.954597     200 cli_runner.go:217] Completed: docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}: (1.1347012s)
	I0516 23:05:28.954660     200 oci.go:653] temporary error verifying shutdown: unknown state "enable-default-cni-20220516225301-2444": docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	I0516 23:05:28.954705     200 oci.go:655] temporary error: container enable-default-cni-20220516225301-2444 status is  but expect it to be exited
	I0516 23:05:28.954743     200 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "enable-default-cni-20220516225301-2444": docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	I0516 23:05:30.074965     200 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}
	W0516 23:05:31.208963     200 cli_runner.go:211] docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:05:31.209022     200 cli_runner.go:217] Completed: docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}: (1.1339457s)
	I0516 23:05:31.209071     200 oci.go:653] temporary error verifying shutdown: unknown state "enable-default-cni-20220516225301-2444": docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	I0516 23:05:31.209129     200 oci.go:655] temporary error: container enable-default-cni-20220516225301-2444 status is  but expect it to be exited
	I0516 23:05:31.209182     200 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "enable-default-cni-20220516225301-2444": docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	I0516 23:05:32.737695     200 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}
	W0516 23:05:33.856915     200 cli_runner.go:211] docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:05:33.856967     200 cli_runner.go:217] Completed: docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}: (1.1191135s)
	I0516 23:05:33.856967     200 oci.go:653] temporary error verifying shutdown: unknown state "enable-default-cni-20220516225301-2444": docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	I0516 23:05:33.856967     200 oci.go:655] temporary error: container enable-default-cni-20220516225301-2444 status is  but expect it to be exited
	I0516 23:05:33.856967     200 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "enable-default-cni-20220516225301-2444": docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	I0516 23:05:36.915995     200 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}
	W0516 23:05:38.067997     200 cli_runner.go:211] docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:05:38.067997     200 cli_runner.go:217] Completed: docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}: (1.1519921s)
	I0516 23:05:38.067997     200 oci.go:653] temporary error verifying shutdown: unknown state "enable-default-cni-20220516225301-2444": docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	I0516 23:05:38.067997     200 oci.go:655] temporary error: container enable-default-cni-20220516225301-2444 status is  but expect it to be exited
	I0516 23:05:38.067997     200 oci.go:88] couldn't shut down enable-default-cni-20220516225301-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "enable-default-cni-20220516225301-2444": docker container inspect enable-default-cni-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	 
	I0516 23:05:38.079988     200 cli_runner.go:164] Run: docker rm -f -v enable-default-cni-20220516225301-2444
	I0516 23:05:39.200592     200 cli_runner.go:217] Completed: docker rm -f -v enable-default-cni-20220516225301-2444: (1.1203161s)
	I0516 23:05:39.212140     200 cli_runner.go:164] Run: docker container inspect -f {{.Id}} enable-default-cni-20220516225301-2444
	W0516 23:05:40.340657     200 cli_runner.go:211] docker container inspect -f {{.Id}} enable-default-cni-20220516225301-2444 returned with exit code 1
	I0516 23:05:40.340734     200 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} enable-default-cni-20220516225301-2444: (1.1285079s)
	I0516 23:05:40.356578     200 cli_runner.go:164] Run: docker network inspect enable-default-cni-20220516225301-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:05:41.522062     200 cli_runner.go:211] docker network inspect enable-default-cni-20220516225301-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:05:41.522062     200 cli_runner.go:217] Completed: docker network inspect enable-default-cni-20220516225301-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1654073s)
	I0516 23:05:41.529030     200 network_create.go:272] running [docker network inspect enable-default-cni-20220516225301-2444] to gather additional debugging logs...
	I0516 23:05:41.529030     200 cli_runner.go:164] Run: docker network inspect enable-default-cni-20220516225301-2444
	W0516 23:05:42.662529     200 cli_runner.go:211] docker network inspect enable-default-cni-20220516225301-2444 returned with exit code 1
	I0516 23:05:42.662626     200 cli_runner.go:217] Completed: docker network inspect enable-default-cni-20220516225301-2444: (1.1323117s)
	I0516 23:05:42.662669     200 network_create.go:275] error running [docker network inspect enable-default-cni-20220516225301-2444]: docker network inspect enable-default-cni-20220516225301-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: enable-default-cni-20220516225301-2444
	I0516 23:05:42.662669     200 network_create.go:277] output of [docker network inspect enable-default-cni-20220516225301-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: enable-default-cni-20220516225301-2444
	
	** /stderr **
	W0516 23:05:42.664171     200 delete.go:139] delete failed (probably ok) <nil>
	I0516 23:05:42.664171     200 fix.go:115] Sleeping 1 second for extra luck!
	I0516 23:05:43.671829     200 start.go:131] createHost starting for "" (driver="docker")
	I0516 23:05:43.675310     200 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0516 23:05:43.675612     200 start.go:165] libmachine.API.Create for "enable-default-cni-20220516225301-2444" (driver="docker")
	I0516 23:05:43.675635     200 client.go:168] LocalClient.Create starting
	I0516 23:05:43.676318     200 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 23:05:43.676608     200 main.go:134] libmachine: Decoding PEM data...
	I0516 23:05:43.676664     200 main.go:134] libmachine: Parsing certificate...
	I0516 23:05:43.676664     200 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 23:05:43.676664     200 main.go:134] libmachine: Decoding PEM data...
	I0516 23:05:43.676664     200 main.go:134] libmachine: Parsing certificate...
	I0516 23:05:43.687721     200 cli_runner.go:164] Run: docker network inspect enable-default-cni-20220516225301-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:05:44.837969     200 cli_runner.go:211] docker network inspect enable-default-cni-20220516225301-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:05:44.838019     200 cli_runner.go:217] Completed: docker network inspect enable-default-cni-20220516225301-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1500451s)
	I0516 23:05:44.846578     200 network_create.go:272] running [docker network inspect enable-default-cni-20220516225301-2444] to gather additional debugging logs...
	I0516 23:05:44.846578     200 cli_runner.go:164] Run: docker network inspect enable-default-cni-20220516225301-2444
	W0516 23:05:45.971369     200 cli_runner.go:211] docker network inspect enable-default-cni-20220516225301-2444 returned with exit code 1
	I0516 23:05:45.971432     200 cli_runner.go:217] Completed: docker network inspect enable-default-cni-20220516225301-2444: (1.1246463s)
	I0516 23:05:45.971432     200 network_create.go:275] error running [docker network inspect enable-default-cni-20220516225301-2444]: docker network inspect enable-default-cni-20220516225301-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: enable-default-cni-20220516225301-2444
	I0516 23:05:45.971432     200 network_create.go:277] output of [docker network inspect enable-default-cni-20220516225301-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: enable-default-cni-20220516225301-2444
	
	** /stderr **
	I0516 23:05:45.979505     200 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 23:05:47.068243     200 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0886688s)
	I0516 23:05:47.087895     200 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005a4480] amended:true}} dirty:map[192.168.49.0:0xc0005a4480 192.168.58.0:0xc0005a4558 192.168.67.0:0xc0005a4800 192.168.76.0:0xc000006540] misses:2}
	I0516 23:05:47.087962     200 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:05:47.104679     200 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005a4480] amended:true}} dirty:map[192.168.49.0:0xc0005a4480 192.168.58.0:0xc0005a4558 192.168.67.0:0xc0005a4800 192.168.76.0:0xc000006540] misses:3}
	I0516 23:05:47.104679     200 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:05:47.122139     200 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005a4480 192.168.58.0:0xc0005a4558 192.168.67.0:0xc0005a4800 192.168.76.0:0xc000006540] amended:false}} dirty:map[] misses:0}
	I0516 23:05:47.122139     200 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:05:47.140147     200 network.go:279] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005a4480 192.168.58.0:0xc0005a4558 192.168.67.0:0xc0005a4800 192.168.76.0:0xc000006540] amended:false}} dirty:map[] misses:0}
	I0516 23:05:47.140147     200 network.go:238] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:05:47.156171     200 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005a4480 192.168.58.0:0xc0005a4558 192.168.67.0:0xc0005a4800 192.168.76.0:0xc000006540] amended:true}} dirty:map[192.168.49.0:0xc0005a4480 192.168.58.0:0xc0005a4558 192.168.67.0:0xc0005a4800 192.168.76.0:0xc000006540 192.168.85.0:0xc0006d07d0] misses:0}
	I0516 23:05:47.156171     200 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:05:47.156171     200 network_create.go:115] attempt to create docker network enable-default-cni-20220516225301-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0516 23:05:47.165832     200 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220516225301-2444
	W0516 23:05:48.222804     200 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220516225301-2444 returned with exit code 1
	I0516 23:05:48.222804     200 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220516225301-2444: (1.0569629s)
	E0516 23:05:48.222804     200 network_create.go:104] error while trying to create docker network enable-default-cni-20220516225301-2444 192.168.85.0/24: create docker network enable-default-cni-20220516225301-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220516225301-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 20cb60146de8d4573cb33aac7698f9cc5ac40e3fc72ed565612e4030c9629005 (br-20cb60146de8): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	W0516 23:05:48.222804     200 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network enable-default-cni-20220516225301-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220516225301-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 20cb60146de8d4573cb33aac7698f9cc5ac40e3fc72ed565612e4030c9629005 (br-20cb60146de8): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network enable-default-cni-20220516225301-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220516225301-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 20cb60146de8d4573cb33aac7698f9cc5ac40e3fc72ed565612e4030c9629005 (br-20cb60146de8): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	I0516 23:05:48.246309     200 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 23:05:49.349365     200 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.10287s)
	I0516 23:05:49.362091     200 cli_runner.go:164] Run: docker volume create enable-default-cni-20220516225301-2444 --label name.minikube.sigs.k8s.io=enable-default-cni-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 23:05:50.443769     200 cli_runner.go:211] docker volume create enable-default-cni-20220516225301-2444 --label name.minikube.sigs.k8s.io=enable-default-cni-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 23:05:50.443769     200 cli_runner.go:217] Completed: docker volume create enable-default-cni-20220516225301-2444 --label name.minikube.sigs.k8s.io=enable-default-cni-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0815612s)
	I0516 23:05:50.443769     200 client.go:171] LocalClient.Create took 6.7680231s
	I0516 23:05:52.458135     200 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:05:52.465237     200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444
	W0516 23:05:53.587300     200 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444 returned with exit code 1
	I0516 23:05:53.587300     200 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444: (1.1220532s)
	I0516 23:05:53.587300     200 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	I0516 23:05:53.933694     200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444
	W0516 23:05:55.054929     200 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444 returned with exit code 1
	I0516 23:05:55.054929     200 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444: (1.1212244s)
	W0516 23:05:55.054929     200 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	
	W0516 23:05:55.054929     200 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	I0516 23:05:55.067043     200 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:05:55.075012     200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444
	W0516 23:05:56.257577     200 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444 returned with exit code 1
	I0516 23:05:56.257577     200 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444: (1.1825214s)
	I0516 23:05:56.257577     200 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	I0516 23:05:56.501265     200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444
	W0516 23:05:57.621090     200 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444 returned with exit code 1
	I0516 23:05:57.621090     200 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444: (1.1196737s)
	W0516 23:05:57.621090     200 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	
	W0516 23:05:57.621090     200 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	I0516 23:05:57.621090     200 start.go:134] duration metric: createHost completed in 13.9490719s
	I0516 23:05:57.631037     200 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:05:57.638108     200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444
	W0516 23:05:58.741140     200 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444 returned with exit code 1
	I0516 23:05:58.741140     200 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444: (1.1030216s)
	I0516 23:05:58.741140     200 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	I0516 23:05:58.999830     200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444
	W0516 23:06:00.131792     200 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444 returned with exit code 1
	I0516 23:06:00.131983     200 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444: (1.1317705s)
	W0516 23:06:00.131983     200 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	
	W0516 23:06:00.131983     200 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	I0516 23:06:00.143207     200 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:06:00.150148     200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444
	W0516 23:06:01.265311     200 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444 returned with exit code 1
	I0516 23:06:01.265372     200 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444: (1.1150022s)
	I0516 23:06:01.265680     200 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	I0516 23:06:01.484956     200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444
	W0516 23:06:02.602893     200 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444 returned with exit code 1
	I0516 23:06:02.602893     200 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444: (1.1179266s)
	W0516 23:06:02.602893     200 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	
	W0516 23:06:02.602893     200 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220516225301-2444
	I0516 23:06:02.602893     200 fix.go:57] fixHost completed within 47.9077397s
	I0516 23:06:02.602893     200 start.go:81] releasing machines lock for "enable-default-cni-20220516225301-2444", held for 47.9078718s
	W0516 23:06:02.602893     200 out.go:239] * Failed to start docker container. Running "minikube delete -p enable-default-cni-20220516225301-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for enable-default-cni-20220516225301-2444 container: docker volume create enable-default-cni-20220516225301-2444 --label name.minikube.sigs.k8s.io=enable-default-cni-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create enable-default-cni-20220516225301-2444: error while creating volume root path '/var/lib/docker/volumes/enable-default-cni-20220516225301-2444': mkdir /var/lib/docker/volumes/enable-default-cni-20220516225301-2444: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p enable-default-cni-20220516225301-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for enable-default-cni-20220516225301-2444 container: docker volume create enable-default-cni-20220516225301-2444 --label name.minikube.sigs.k8s.io=enable-default-cni-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create enable-default-cni-20220516225301-2444: error while creating volume root path '/var/lib/docker/volumes/enable-default-cni-20220516225301-2444': mkdir /var/lib/docker/volumes/enable-default-cni-20220516225301-2444: read-only file system
	
	I0516 23:06:02.607901     200 out.go:177] 
	W0516 23:06:02.608877     200 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for enable-default-cni-20220516225301-2444 container: docker volume create enable-default-cni-20220516225301-2444 --label name.minikube.sigs.k8s.io=enable-default-cni-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create enable-default-cni-20220516225301-2444: error while creating volume root path '/var/lib/docker/volumes/enable-default-cni-20220516225301-2444': mkdir /var/lib/docker/volumes/enable-default-cni-20220516225301-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for enable-default-cni-20220516225301-2444 container: docker volume create enable-default-cni-20220516225301-2444 --label name.minikube.sigs.k8s.io=enable-default-cni-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create enable-default-cni-20220516225301-2444: error while creating volume root path '/var/lib/docker/volumes/enable-default-cni-20220516225301-2444': mkdir /var/lib/docker/volumes/enable-default-cni-20220516225301-2444: read-only file system
	
	W0516 23:06:02.608877     200 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0516 23:06:02.608877     200 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0516 23:06:02.613832     200 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 60
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (82.36s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (4.06s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:277: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-different-port-20220516230045-2444" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220516230045-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220516230045-2444: exit status 1 (1.1419732s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220516230045-2444

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220516230045-2444 -n default-k8s-different-port-20220516230045-2444

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220516230045-2444 -n default-k8s-different-port-20220516230045-2444: exit status 7 (2.9049273s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 23:05:11.499180    9104 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220516230045-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (4.06s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (7.29s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p newest-cni-20220516230100-2444 "sudo crictl images -o json"

=== CONT  TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p newest-cni-20220516230100-2444 "sudo crictl images -o json": exit status 80 (3.2081654s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_2ebd0b017f5d88727e5083393ee181280e239d1d_4.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:306: failed to get images inside minikube. args "out/minikube-windows-amd64.exe ssh -p newest-cni-20220516230100-2444 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:306: failed to decode images json unexpected end of JSON input. output:


start_stop_delete_test.go:306: v1.23.6 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns/coredns:v1.8.6",
- 	"k8s.gcr.io/etcd:3.5.1-0",
- 	"k8s.gcr.io/kube-apiserver:v1.23.6",
- 	"k8s.gcr.io/kube-controller-manager:v1.23.6",
- 	"k8s.gcr.io/kube-proxy:v1.23.6",
- 	"k8s.gcr.io/kube-scheduler:v1.23.6",
- 	"k8s.gcr.io/pause:3.6",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220516230100-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220516230100-2444: exit status 1 (1.1440942s)

-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: newest-cni-20220516230100-2444

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220516230100-2444 -n newest-cni-20220516230100-2444

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220516230100-2444 -n newest-cni-20220516230100-2444: exit status 7 (2.9261589s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0516 23:05:17.339238    8872 status.go:247] status error: host: state: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20220516230100-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (7.29s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (4.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:290: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-different-port-20220516230045-2444" does not exist
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context default-k8s-different-port-20220516230045-2444 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:293: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220516230045-2444 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (226.1027ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-different-port-20220516230045-2444" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:295: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-different-port-20220516230045-2444 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:299: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220516230045-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220516230045-2444: exit status 1 (1.1041546s)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220516230045-2444

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220516230045-2444 -n default-k8s-different-port-20220516230045-2444

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220516230045-2444 -n default-k8s-different-port-20220516230045-2444: exit status 7 (2.9843821s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0516 23:05:15.819661    5292 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220516230045-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (4.34s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (7.57s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p default-k8s-different-port-20220516230045-2444 "sudo crictl images -o json"

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p default-k8s-different-port-20220516230045-2444 "sudo crictl images -o json": exit status 80 (3.3243579s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_2ebd0b017f5d88727e5083393ee181280e239d1d_4.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:306: failed to get images inside minikube. args "out/minikube-windows-amd64.exe ssh -p default-k8s-different-port-20220516230045-2444 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:306: failed to decode images json: unexpected end of JSON input. output:

                                                
                                                

                                                
                                                
start_stop_delete_test.go:306: v1.23.6 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns/coredns:v1.8.6",
- 	"k8s.gcr.io/etcd:3.5.1-0",
- 	"k8s.gcr.io/kube-apiserver:v1.23.6",
- 	"k8s.gcr.io/kube-controller-manager:v1.23.6",
- 	"k8s.gcr.io/kube-proxy:v1.23.6",
- 	"k8s.gcr.io/kube-scheduler:v1.23.6",
- 	"k8s.gcr.io/pause:3.6",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220516230045-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220516230045-2444: exit status 1 (1.20466s)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220516230045-2444

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220516230045-2444 -n default-k8s-different-port-20220516230045-2444

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220516230045-2444 -n default-k8s-different-port-20220516230045-2444: exit status 7 (3.0203814s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0516 23:05:23.387471    4668 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220516230045-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (7.57s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (11.85s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-20220516230100-2444 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p newest-cni-20220516230100-2444 --alsologtostderr -v=1: exit status 80 (3.3215194s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0516 23:05:17.589458    1700 out.go:296] Setting OutFile to fd 1488 ...
	I0516 23:05:17.653271    1700 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:05:17.653271    1700 out.go:309] Setting ErrFile to fd 1416...
	I0516 23:05:17.653815    1700 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:05:17.664730    1700 out.go:303] Setting JSON to false
	I0516 23:05:17.664730    1700 mustload.go:65] Loading cluster: newest-cni-20220516230100-2444
	I0516 23:05:17.667549    1700 config.go:178] Loaded profile config "newest-cni-20220516230100-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:05:17.684989    1700 cli_runner.go:164] Run: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}
	W0516 23:05:20.366352    1700 cli_runner.go:211] docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:05:20.366352    1700 cli_runner.go:217] Completed: docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: (2.6813396s)
	I0516 23:05:20.369347    1700 out.go:177] 
	W0516 23:05:20.372348    1700 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	
	X Exiting due to GUEST_STATUS: state: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444
	
	W0516 23:05:20.372348    1700 out.go:239] * 
	* 
	W0516 23:05:20.607409    1700 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_10.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_10.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0516 23:05:20.614429    1700 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:313: out/minikube-windows-amd64.exe pause -p newest-cni-20220516230100-2444 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220516230100-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220516230100-2444: exit status 1 (1.1819741s)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: newest-cni-20220516230100-2444

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220516230100-2444 -n newest-cni-20220516230100-2444

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220516230100-2444 -n newest-cni-20220516230100-2444: exit status 7 (3.0202358s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0516 23:05:24.876934    2896 status.go:247] status error: host: state: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20220516230100-2444" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220516230100-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220516230100-2444: exit status 1 (1.1793962s)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: newest-cni-20220516230100-2444

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220516230100-2444 -n newest-cni-20220516230100-2444

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220516230100-2444 -n newest-cni-20220516230100-2444: exit status 7 (3.1156154s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0516 23:05:29.170597    1632 status.go:247] status error: host: state: unknown state "newest-cni-20220516230100-2444": docker container inspect newest-cni-20220516230100-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220516230100-2444

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20220516230100-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (11.85s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/Pause (11.71s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-different-port-20220516230045-2444 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p default-k8s-different-port-20220516230045-2444 --alsologtostderr -v=1: exit status 80 (3.3262143s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0516 23:05:23.660122    5148 out.go:296] Setting OutFile to fd 1784 ...
	I0516 23:05:23.726310    5148 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:05:23.726310    5148 out.go:309] Setting ErrFile to fd 1540...
	I0516 23:05:23.726310    5148 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:05:23.738447    5148 out.go:303] Setting JSON to false
	I0516 23:05:23.738513    5148 mustload.go:65] Loading cluster: default-k8s-different-port-20220516230045-2444
	I0516 23:05:23.739359    5148 config.go:178] Loaded profile config "default-k8s-different-port-20220516230045-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:05:23.757630    5148 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}
	W0516 23:05:26.448394    5148 cli_runner.go:211] docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:05:26.448464    5148 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: (2.6905749s)
	I0516 23:05:26.453642    5148 out.go:177] 
	W0516 23:05:26.455548    5148 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	
	X Exiting due to GUEST_STATUS: state: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444
	
	W0516 23:05:26.456083    5148 out.go:239] * 
	* 
	W0516 23:05:26.715240    5148 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_10.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_10.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0516 23:05:26.717858    5148 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:313: out/minikube-windows-amd64.exe pause -p default-k8s-different-port-20220516230045-2444 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220516230045-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220516230045-2444: exit status 1 (1.1954415s)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220516230045-2444

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220516230045-2444 -n default-k8s-different-port-20220516230045-2444

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220516230045-2444 -n default-k8s-different-port-20220516230045-2444: exit status 7 (3.0256453s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0516 23:05:30.962335    7620 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220516230045-2444" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220516230045-2444
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220516230045-2444: exit status 1 (1.1753923s)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220516230045-2444

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220516230045-2444 -n default-k8s-different-port-20220516230045-2444
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220516230045-2444 -n default-k8s-different-port-20220516230045-2444: exit status 7 (2.9617119s)

                                                
                                                
-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0516 23:05:35.095988    8920 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220516230045-2444": docker container inspect default-k8s-different-port-20220516230045-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220516230045-2444

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220516230045-2444" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Pause (11.71s)

TestNetworkPlugins/group/kindnet/Start (81.66s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-20220516225309-2444 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kindnet-20220516225309-2444 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker: exit status 60 (1m21.5766917s)

-- stdout --
	* [kindnet-20220516225309-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node kindnet-20220516225309-2444 in cluster kindnet-20220516225309-2444
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "kindnet-20220516225309-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0516 23:05:45.927418    6988 out.go:296] Setting OutFile to fd 1664 ...
	I0516 23:05:45.992595    6988 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:05:45.992595    6988 out.go:309] Setting ErrFile to fd 1924...
	I0516 23:05:45.992662    6988 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:05:46.007809    6988 out.go:303] Setting JSON to false
	I0516 23:05:46.010448    6988 start.go:115] hostinfo: {"hostname":"minikube2","uptime":5458,"bootTime":1652736888,"procs":155,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0516 23:05:46.010448    6988 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0516 23:05:46.016100    6988 out.go:177] * [kindnet-20220516225309-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0516 23:05:46.020086    6988 notify.go:193] Checking for updates...
	I0516 23:05:46.022158    6988 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0516 23:05:46.024669    6988 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0516 23:05:46.026658    6988 out.go:177]   - MINIKUBE_LOCATION=12739
	I0516 23:05:46.031150    6988 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0516 23:05:46.034142    6988 config.go:178] Loaded profile config "custom-weave-20220516225309-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:05:46.034142    6988 config.go:178] Loaded profile config "enable-default-cni-20220516225301-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:05:46.035418    6988 config.go:178] Loaded profile config "multinode-20220516223121-2444-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:05:46.035418    6988 driver.go:358] Setting default libvirt URI to qemu:///system
	I0516 23:05:48.800565    6988 docker.go:137] docker version: linux-20.10.14
	I0516 23:05:48.809263    6988 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 23:05:51.025387    6988 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.2154s)
	I0516 23:05:51.026055    6988 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 23:05:49.9029149 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0516 23:05:51.029127    6988 out.go:177] * Using the docker driver based on user configuration
	I0516 23:05:51.032360    6988 start.go:284] selected driver: docker
	I0516 23:05:51.032394    6988 start.go:806] validating driver "docker" against <nil>
	I0516 23:05:51.032484    6988 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0516 23:05:51.125419    6988 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 23:05:53.306442    6988 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1809622s)
	I0516 23:05:53.306704    6988 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 23:05:52.2118007 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0516 23:05:53.306975    6988 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0516 23:05:53.307651    6988 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0516 23:05:53.315107    6988 out.go:177] * Using Docker Desktop driver with the root privilege
	I0516 23:05:53.316724    6988 cni.go:95] Creating CNI manager for "kindnet"
	I0516 23:05:53.316724    6988 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0516 23:05:53.316724    6988 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0516 23:05:53.316724    6988 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0516 23:05:53.316724    6988 start_flags.go:306] config:
	{Name:kindnet-20220516225309-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:kindnet-20220516225309-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 23:05:53.319724    6988 out.go:177] * Starting control plane node kindnet-20220516225309-2444 in cluster kindnet-20220516225309-2444
	I0516 23:05:53.322725    6988 cache.go:120] Beginning downloading kic base image for docker with docker
	I0516 23:05:53.329720    6988 out.go:177] * Pulling base image ...
	I0516 23:05:53.331708    6988 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0516 23:05:53.331708    6988 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0516 23:05:53.331708    6988 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0516 23:05:53.331708    6988 cache.go:57] Caching tarball of preloaded images
	I0516 23:05:53.332723    6988 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0516 23:05:53.332723    6988 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0516 23:05:53.332723    6988 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-20220516225309-2444\config.json ...
	I0516 23:05:53.333728    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-20220516225309-2444\config.json: {Name:mka2ab6ccdff583de04858818063dec0b555bff1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0516 23:05:54.508012    6988 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0516 23:05:54.508169    6988 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:05:54.508675    6988 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:05:54.508725    6988 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
	I0516 23:05:54.508922    6988 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory, skipping pull
	I0516 23:05:54.508970    6988 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in cache, skipping pull
	I0516 23:05:54.509173    6988 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
	I0516 23:05:54.509235    6988 cache.go:160] Loading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from local cache
	I0516 23:05:54.509235    6988 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:05:56.964695    6988 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from cached tarball
	I0516 23:05:56.964695    6988 cache.go:206] Successfully downloaded all kic artifacts
	I0516 23:05:56.964865    6988 start.go:352] acquiring machines lock for kindnet-20220516225309-2444: {Name:mk510590ff389cd03f80d54f4e38c24f6cd10184 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 23:05:56.965151    6988 start.go:356] acquired machines lock for "kindnet-20220516225309-2444" in 286.2µs
	I0516 23:05:56.965342    6988 start.go:91] Provisioning new machine with config: &{Name:kindnet-20220516225309-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:kindnet-20220516225309-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0516 23:05:56.965461    6988 start.go:131] createHost starting for "" (driver="docker")
	I0516 23:05:56.969424    6988 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0516 23:05:56.970164    6988 start.go:165] libmachine.API.Create for "kindnet-20220516225309-2444" (driver="docker")
	I0516 23:05:56.970164    6988 client.go:168] LocalClient.Create starting
	I0516 23:05:56.970682    6988 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 23:05:56.970682    6988 main.go:134] libmachine: Decoding PEM data...
	I0516 23:05:56.970682    6988 main.go:134] libmachine: Parsing certificate...
	I0516 23:05:56.971447    6988 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 23:05:56.971447    6988 main.go:134] libmachine: Decoding PEM data...
	I0516 23:05:56.971447    6988 main.go:134] libmachine: Parsing certificate...
	I0516 23:05:56.986223    6988 cli_runner.go:164] Run: docker network inspect kindnet-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:05:58.091067    6988 cli_runner.go:211] docker network inspect kindnet-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:05:58.091067    6988 cli_runner.go:217] Completed: docker network inspect kindnet-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1048346s)
	I0516 23:05:58.101542    6988 network_create.go:272] running [docker network inspect kindnet-20220516225309-2444] to gather additional debugging logs...
	I0516 23:05:58.101542    6988 cli_runner.go:164] Run: docker network inspect kindnet-20220516225309-2444
	W0516 23:05:59.219946    6988 cli_runner.go:211] docker network inspect kindnet-20220516225309-2444 returned with exit code 1
	I0516 23:05:59.220002    6988 cli_runner.go:217] Completed: docker network inspect kindnet-20220516225309-2444: (1.1182942s)
	I0516 23:05:59.220071    6988 network_create.go:275] error running [docker network inspect kindnet-20220516225309-2444]: docker network inspect kindnet-20220516225309-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-20220516225309-2444
	I0516 23:05:59.220071    6988 network_create.go:277] output of [docker network inspect kindnet-20220516225309-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-20220516225309-2444
	
	** /stderr **
	I0516 23:05:59.228777    6988 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 23:06:00.363870    6988 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1348892s)
	I0516 23:06:00.388286    6988 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0000062b8] misses:0}
	I0516 23:06:00.388286    6988 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:06:00.388286    6988 network_create.go:115] attempt to create docker network kindnet-20220516225309-2444 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0516 23:06:00.395240    6988 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220516225309-2444
	W0516 23:06:01.518617    6988 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220516225309-2444 returned with exit code 1
	I0516 23:06:01.518617    6988 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220516225309-2444: (1.1233672s)
	W0516 23:06:01.518617    6988 network_create.go:107] failed to create docker network kindnet-20220516225309-2444 192.168.49.0/24, will retry: subnet is taken
	I0516 23:06:01.542067    6988 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000062b8] amended:false}} dirty:map[] misses:0}
	I0516 23:06:01.542186    6988 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:06:01.563925    6988 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000062b8] amended:true}} dirty:map[192.168.49.0:0xc0000062b8 192.168.58.0:0xc0005a4b50] misses:0}
	I0516 23:06:01.563925    6988 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:06:01.563925    6988 network_create.go:115] attempt to create docker network kindnet-20220516225309-2444 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0516 23:06:01.573951    6988 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220516225309-2444
	W0516 23:06:02.710969    6988 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220516225309-2444 returned with exit code 1
	I0516 23:06:02.710969    6988 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220516225309-2444: (1.1370084s)
	W0516 23:06:02.710969    6988 network_create.go:107] failed to create docker network kindnet-20220516225309-2444 192.168.58.0/24, will retry: subnet is taken
	I0516 23:06:02.728924    6988 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000062b8] amended:true}} dirty:map[192.168.49.0:0xc0000062b8 192.168.58.0:0xc0005a4b50] misses:1}
	I0516 23:06:02.728924    6988 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:06:02.738912    6988 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000062b8] amended:true}} dirty:map[192.168.49.0:0xc0000062b8 192.168.58.0:0xc0005a4b50 192.168.67.0:0xc000006640] misses:1}
	I0516 23:06:02.738912    6988 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:06:02.738912    6988 network_create.go:115] attempt to create docker network kindnet-20220516225309-2444 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0516 23:06:02.763442    6988 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220516225309-2444
	W0516 23:06:03.913715    6988 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220516225309-2444 returned with exit code 1
	I0516 23:06:03.913715    6988 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220516225309-2444: (1.1502626s)
	W0516 23:06:03.913715    6988 network_create.go:107] failed to create docker network kindnet-20220516225309-2444 192.168.67.0/24, will retry: subnet is taken
	I0516 23:06:03.936680    6988 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000062b8] amended:true}} dirty:map[192.168.49.0:0xc0000062b8 192.168.58.0:0xc0005a4b50 192.168.67.0:0xc000006640] misses:2}
	I0516 23:06:03.936680    6988 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:06:03.954717    6988 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000062b8] amended:true}} dirty:map[192.168.49.0:0xc0000062b8 192.168.58.0:0xc0005a4b50 192.168.67.0:0xc000006640 192.168.76.0:0xc0005a4be8] misses:2}
	I0516 23:06:03.954717    6988 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:06:03.954717    6988 network_create.go:115] attempt to create docker network kindnet-20220516225309-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0516 23:06:03.961723    6988 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220516225309-2444
	W0516 23:06:05.074685    6988 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220516225309-2444 returned with exit code 1
	I0516 23:06:05.074685    6988 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220516225309-2444: (1.112952s)
	E0516 23:06:05.074685    6988 network_create.go:104] error while trying to create docker network kindnet-20220516225309-2444 192.168.76.0/24: create docker network kindnet-20220516225309-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220516225309-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 8a4ed9cb291774b34280b25be78429374b0934e76138cc815d117d965e8f3f8f (br-8a4ed9cb2917): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	W0516 23:06:05.074685    6988 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kindnet-20220516225309-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220516225309-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 8a4ed9cb291774b34280b25be78429374b0934e76138cc815d117d965e8f3f8f (br-8a4ed9cb2917): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kindnet-20220516225309-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220516225309-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 8a4ed9cb291774b34280b25be78429374b0934e76138cc815d117d965e8f3f8f (br-8a4ed9cb2917): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	I0516 23:06:05.089759    6988 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 23:06:06.231929    6988 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1421603s)
	I0516 23:06:06.238935    6988 cli_runner.go:164] Run: docker volume create kindnet-20220516225309-2444 --label name.minikube.sigs.k8s.io=kindnet-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 23:06:07.370678    6988 cli_runner.go:211] docker volume create kindnet-20220516225309-2444 --label name.minikube.sigs.k8s.io=kindnet-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 23:06:07.370678    6988 cli_runner.go:217] Completed: docker volume create kindnet-20220516225309-2444 --label name.minikube.sigs.k8s.io=kindnet-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: (1.1317335s)
	I0516 23:06:07.370678    6988 client.go:171] LocalClient.Create took 10.4003494s
	I0516 23:06:09.401232    6988 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:06:09.407989    6988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444
	W0516 23:06:10.573560    6988 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444 returned with exit code 1
	I0516 23:06:10.573560    6988 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444: (1.1655609s)
	I0516 23:06:10.573560    6988 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	I0516 23:06:10.868801    6988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444
	W0516 23:06:12.044652    6988 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444 returned with exit code 1
	I0516 23:06:12.044652    6988 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444: (1.1758407s)
	W0516 23:06:12.044652    6988 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	
	W0516 23:06:12.044652    6988 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	I0516 23:06:12.059911    6988 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:06:12.075162    6988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444
	W0516 23:06:13.188317    6988 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444 returned with exit code 1
	I0516 23:06:13.188317    6988 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444: (1.112914s)
	I0516 23:06:13.188317    6988 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	I0516 23:06:13.501650    6988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444
	W0516 23:06:14.635357    6988 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444 returned with exit code 1
	I0516 23:06:14.635357    6988 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444: (1.1335357s)
	W0516 23:06:14.635357    6988 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	
	W0516 23:06:14.635357    6988 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	I0516 23:06:14.635357    6988 start.go:134] duration metric: createHost completed in 17.6697406s
	I0516 23:06:14.635357    6988 start.go:81] releasing machines lock for "kindnet-20220516225309-2444", held for 17.6700221s
	W0516 23:06:14.635357    6988 start.go:608] error starting host: creating host: create: creating: setting up container node: creating volume for kindnet-20220516225309-2444 container: docker volume create kindnet-20220516225309-2444 --label name.minikube.sigs.k8s.io=kindnet-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kindnet-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/kindnet-20220516225309-2444': mkdir /var/lib/docker/volumes/kindnet-20220516225309-2444: read-only file system
	I0516 23:06:14.652645    6988 cli_runner.go:164] Run: docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}
	W0516 23:06:15.749588    6988 cli_runner.go:211] docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:06:15.749588    6988 cli_runner.go:217] Completed: docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}: (1.0968688s)
	I0516 23:06:15.749588    6988 delete.go:82] Unable to get host status for kindnet-20220516225309-2444, assuming it has already been deleted: state: unknown state "kindnet-20220516225309-2444": docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	W0516 23:06:15.749588    6988 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for kindnet-20220516225309-2444 container: docker volume create kindnet-20220516225309-2444 --label name.minikube.sigs.k8s.io=kindnet-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kindnet-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/kindnet-20220516225309-2444': mkdir /var/lib/docker/volumes/kindnet-20220516225309-2444: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for kindnet-20220516225309-2444 container: docker volume create kindnet-20220516225309-2444 --label name.minikube.sigs.k8s.io=kindnet-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kindnet-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/kindnet-20220516225309-2444': mkdir /var/lib/docker/volumes/kindnet-20220516225309-2444: read-only file system
	
	I0516 23:06:15.749588    6988 start.go:623] Will try again in 5 seconds ...
	I0516 23:06:20.759351    6988 start.go:352] acquiring machines lock for kindnet-20220516225309-2444: {Name:mk510590ff389cd03f80d54f4e38c24f6cd10184 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 23:06:20.759651    6988 start.go:356] acquired machines lock for "kindnet-20220516225309-2444" in 186.5µs
	I0516 23:06:20.759705    6988 start.go:94] Skipping create...Using existing machine configuration
	I0516 23:06:20.759705    6988 fix.go:55] fixHost starting: 
	I0516 23:06:20.775796    6988 cli_runner.go:164] Run: docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}
	W0516 23:06:21.851648    6988 cli_runner.go:211] docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:06:21.851698    6988 cli_runner.go:217] Completed: docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}: (1.0757344s)
	I0516 23:06:21.851788    6988 fix.go:103] recreateIfNeeded on kindnet-20220516225309-2444: state= err=unknown state "kindnet-20220516225309-2444": docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	I0516 23:06:21.851916    6988 fix.go:108] machineExists: false. err=machine does not exist
	I0516 23:06:21.855003    6988 out.go:177] * docker "kindnet-20220516225309-2444" container is missing, will recreate.
	I0516 23:06:21.856989    6988 delete.go:124] DEMOLISHING kindnet-20220516225309-2444 ...
	I0516 23:06:21.871699    6988 cli_runner.go:164] Run: docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}
	W0516 23:06:22.909612    6988 cli_runner.go:211] docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:06:22.909709    6988 cli_runner.go:217] Completed: docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}: (1.0379041s)
	W0516 23:06:22.909793    6988 stop.go:75] unable to get state: unknown state "kindnet-20220516225309-2444": docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	I0516 23:06:22.909836    6988 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "kindnet-20220516225309-2444": docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	I0516 23:06:22.925124    6988 cli_runner.go:164] Run: docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}
	W0516 23:06:24.015143    6988 cli_runner.go:211] docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:06:24.015143    6988 cli_runner.go:217] Completed: docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}: (1.0898555s)
	I0516 23:06:24.015143    6988 delete.go:82] Unable to get host status for kindnet-20220516225309-2444, assuming it has already been deleted: state: unknown state "kindnet-20220516225309-2444": docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	I0516 23:06:24.026611    6988 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kindnet-20220516225309-2444
	W0516 23:06:25.078369    6988 cli_runner.go:211] docker container inspect -f {{.Id}} kindnet-20220516225309-2444 returned with exit code 1
	I0516 23:06:25.078369    6988 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} kindnet-20220516225309-2444: (1.0516764s)
	I0516 23:06:25.078369    6988 kic.go:356] could not find the container kindnet-20220516225309-2444 to remove it. will try anyways
	I0516 23:06:25.088568    6988 cli_runner.go:164] Run: docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}
	W0516 23:06:26.121475    6988 cli_runner.go:211] docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:06:26.121510    6988 cli_runner.go:217] Completed: docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}: (1.0327439s)
	W0516 23:06:26.121611    6988 oci.go:84] error getting container status, will try to delete anyways: unknown state "kindnet-20220516225309-2444": docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	I0516 23:06:26.129774    6988 cli_runner.go:164] Run: docker exec --privileged -t kindnet-20220516225309-2444 /bin/bash -c "sudo init 0"
	W0516 23:06:27.198863    6988 cli_runner.go:211] docker exec --privileged -t kindnet-20220516225309-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 23:06:27.198863    6988 cli_runner.go:217] Completed: docker exec --privileged -t kindnet-20220516225309-2444 /bin/bash -c "sudo init 0": (1.0690792s)
	I0516 23:06:27.198863    6988 oci.go:641] error shutdown kindnet-20220516225309-2444: docker exec --privileged -t kindnet-20220516225309-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	I0516 23:06:28.210710    6988 cli_runner.go:164] Run: docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}
	W0516 23:06:29.259464    6988 cli_runner.go:211] docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:06:29.259464    6988 cli_runner.go:217] Completed: docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}: (1.0487447s)
	I0516 23:06:29.259464    6988 oci.go:653] temporary error verifying shutdown: unknown state "kindnet-20220516225309-2444": docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	I0516 23:06:29.259464    6988 oci.go:655] temporary error: container kindnet-20220516225309-2444 status is  but expect it to be exited
	I0516 23:06:29.259464    6988 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "kindnet-20220516225309-2444": docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	I0516 23:06:29.741709    6988 cli_runner.go:164] Run: docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}
	W0516 23:06:30.837092    6988 cli_runner.go:211] docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:06:30.837092    6988 cli_runner.go:217] Completed: docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}: (1.0953732s)
	I0516 23:06:30.837092    6988 oci.go:653] temporary error verifying shutdown: unknown state "kindnet-20220516225309-2444": docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	I0516 23:06:30.837092    6988 oci.go:655] temporary error: container kindnet-20220516225309-2444 status is  but expect it to be exited
	I0516 23:06:30.837092    6988 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "kindnet-20220516225309-2444": docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	I0516 23:06:31.748495    6988 cli_runner.go:164] Run: docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}
	W0516 23:06:32.836741    6988 cli_runner.go:211] docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:06:32.836838    6988 cli_runner.go:217] Completed: docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}: (1.0880612s)
	I0516 23:06:32.837016    6988 oci.go:653] temporary error verifying shutdown: unknown state "kindnet-20220516225309-2444": docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	I0516 23:06:32.837098    6988 oci.go:655] temporary error: container kindnet-20220516225309-2444 status is  but expect it to be exited
	I0516 23:06:32.837128    6988 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "kindnet-20220516225309-2444": docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	I0516 23:06:33.497817    6988 cli_runner.go:164] Run: docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}
	W0516 23:06:34.563123    6988 cli_runner.go:211] docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:06:34.563123    6988 cli_runner.go:217] Completed: docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}: (1.0652968s)
	I0516 23:06:34.563123    6988 oci.go:653] temporary error verifying shutdown: unknown state "kindnet-20220516225309-2444": docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	I0516 23:06:34.563123    6988 oci.go:655] temporary error: container kindnet-20220516225309-2444 status is  but expect it to be exited
	I0516 23:06:34.563123    6988 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "kindnet-20220516225309-2444": docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	I0516 23:06:35.688630    6988 cli_runner.go:164] Run: docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}
	W0516 23:06:36.749780    6988 cli_runner.go:211] docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:06:36.749780    6988 cli_runner.go:217] Completed: docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}: (1.0611406s)
	I0516 23:06:36.750019    6988 oci.go:653] temporary error verifying shutdown: unknown state "kindnet-20220516225309-2444": docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	I0516 23:06:36.750019    6988 oci.go:655] temporary error: container kindnet-20220516225309-2444 status is  but expect it to be exited
	I0516 23:06:36.750094    6988 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "kindnet-20220516225309-2444": docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	I0516 23:06:38.281075    6988 cli_runner.go:164] Run: docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}
	W0516 23:06:39.358232    6988 cli_runner.go:211] docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:06:39.358232    6988 cli_runner.go:217] Completed: docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}: (1.0771476s)
	I0516 23:06:39.358232    6988 oci.go:653] temporary error verifying shutdown: unknown state "kindnet-20220516225309-2444": docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	I0516 23:06:39.358232    6988 oci.go:655] temporary error: container kindnet-20220516225309-2444 status is  but expect it to be exited
	I0516 23:06:39.358232    6988 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "kindnet-20220516225309-2444": docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	I0516 23:06:42.416285    6988 cli_runner.go:164] Run: docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}
	W0516 23:06:43.454138    6988 cli_runner.go:211] docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:06:43.454369    6988 cli_runner.go:217] Completed: docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}: (1.0378436s)
	I0516 23:06:43.454369    6988 oci.go:653] temporary error verifying shutdown: unknown state "kindnet-20220516225309-2444": docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	I0516 23:06:43.454369    6988 oci.go:655] temporary error: container kindnet-20220516225309-2444 status is  but expect it to be exited
	I0516 23:06:43.454369    6988 oci.go:88] couldn't shut down kindnet-20220516225309-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "kindnet-20220516225309-2444": docker container inspect kindnet-20220516225309-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	 
	I0516 23:06:43.462563    6988 cli_runner.go:164] Run: docker rm -f -v kindnet-20220516225309-2444
	I0516 23:06:44.529376    6988 cli_runner.go:217] Completed: docker rm -f -v kindnet-20220516225309-2444: (1.0665182s)
	I0516 23:06:44.537643    6988 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kindnet-20220516225309-2444
	W0516 23:06:45.556949    6988 cli_runner.go:211] docker container inspect -f {{.Id}} kindnet-20220516225309-2444 returned with exit code 1
	I0516 23:06:45.556949    6988 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} kindnet-20220516225309-2444: (1.0192967s)
	I0516 23:06:45.565172    6988 cli_runner.go:164] Run: docker network inspect kindnet-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:06:46.584167    6988 cli_runner.go:211] docker network inspect kindnet-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:06:46.584167    6988 cli_runner.go:217] Completed: docker network inspect kindnet-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0189858s)
	I0516 23:06:46.593166    6988 network_create.go:272] running [docker network inspect kindnet-20220516225309-2444] to gather additional debugging logs...
	I0516 23:06:46.593166    6988 cli_runner.go:164] Run: docker network inspect kindnet-20220516225309-2444
	W0516 23:06:47.603549    6988 cli_runner.go:211] docker network inspect kindnet-20220516225309-2444 returned with exit code 1
	I0516 23:06:47.603699    6988 cli_runner.go:217] Completed: docker network inspect kindnet-20220516225309-2444: (1.0101984s)
	I0516 23:06:47.603699    6988 network_create.go:275] error running [docker network inspect kindnet-20220516225309-2444]: docker network inspect kindnet-20220516225309-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-20220516225309-2444
	I0516 23:06:47.603699    6988 network_create.go:277] output of [docker network inspect kindnet-20220516225309-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-20220516225309-2444
	
	** /stderr **
	W0516 23:06:47.604480    6988 delete.go:139] delete failed (probably ok) <nil>
	I0516 23:06:47.604480    6988 fix.go:115] Sleeping 1 second for extra luck!
	I0516 23:06:48.610012    6988 start.go:131] createHost starting for "" (driver="docker")
	I0516 23:06:48.615548    6988 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0516 23:06:48.615548    6988 start.go:165] libmachine.API.Create for "kindnet-20220516225309-2444" (driver="docker")
	I0516 23:06:48.615548    6988 client.go:168] LocalClient.Create starting
	I0516 23:06:48.616228    6988 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 23:06:48.616228    6988 main.go:134] libmachine: Decoding PEM data...
	I0516 23:06:48.616228    6988 main.go:134] libmachine: Parsing certificate...
	I0516 23:06:48.616228    6988 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 23:06:48.616228    6988 main.go:134] libmachine: Decoding PEM data...
	I0516 23:06:48.616228    6988 main.go:134] libmachine: Parsing certificate...
	I0516 23:06:48.625588    6988 cli_runner.go:164] Run: docker network inspect kindnet-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:06:49.688715    6988 cli_runner.go:211] docker network inspect kindnet-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:06:49.688715    6988 cli_runner.go:217] Completed: docker network inspect kindnet-20220516225309-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0631173s)
	I0516 23:06:49.696502    6988 network_create.go:272] running [docker network inspect kindnet-20220516225309-2444] to gather additional debugging logs...
	I0516 23:06:49.696502    6988 cli_runner.go:164] Run: docker network inspect kindnet-20220516225309-2444
	W0516 23:06:50.739927    6988 cli_runner.go:211] docker network inspect kindnet-20220516225309-2444 returned with exit code 1
	I0516 23:06:50.739927    6988 cli_runner.go:217] Completed: docker network inspect kindnet-20220516225309-2444: (1.0433698s)
	I0516 23:06:50.739927    6988 network_create.go:275] error running [docker network inspect kindnet-20220516225309-2444]: docker network inspect kindnet-20220516225309-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-20220516225309-2444
	I0516 23:06:50.739927    6988 network_create.go:277] output of [docker network inspect kindnet-20220516225309-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-20220516225309-2444
	
	** /stderr **
	I0516 23:06:50.748815    6988 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 23:06:51.830741    6988 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0819166s)
	I0516 23:06:51.845712    6988 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000062b8] amended:true}} dirty:map[192.168.49.0:0xc0000062b8 192.168.58.0:0xc0005a4b50 192.168.67.0:0xc000006640 192.168.76.0:0xc0005a4be8] misses:2}
	I0516 23:06:51.845712    6988 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:06:51.873492    6988 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000062b8] amended:true}} dirty:map[192.168.49.0:0xc0000062b8 192.168.58.0:0xc0005a4b50 192.168.67.0:0xc000006640 192.168.76.0:0xc0005a4be8] misses:3}
	I0516 23:06:51.874062    6988 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:06:51.890496    6988 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000062b8 192.168.58.0:0xc0005a4b50 192.168.67.0:0xc000006640 192.168.76.0:0xc0005a4be8] amended:false}} dirty:map[] misses:0}
	I0516 23:06:51.890496    6988 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:06:51.909044    6988 network.go:279] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000062b8 192.168.58.0:0xc0005a4b50 192.168.67.0:0xc000006640 192.168.76.0:0xc0005a4be8] amended:false}} dirty:map[] misses:0}
	I0516 23:06:51.909044    6988 network.go:238] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:06:51.926447    6988 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000062b8 192.168.58.0:0xc0005a4b50 192.168.67.0:0xc000006640 192.168.76.0:0xc0005a4be8] amended:true}} dirty:map[192.168.49.0:0xc0000062b8 192.168.58.0:0xc0005a4b50 192.168.67.0:0xc000006640 192.168.76.0:0xc0005a4be8 192.168.85.0:0xc000350b48] misses:0}
	I0516 23:06:51.926447    6988 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:06:51.926447    6988 network_create.go:115] attempt to create docker network kindnet-20220516225309-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0516 23:06:51.936504    6988 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220516225309-2444
	W0516 23:06:53.006314    6988 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220516225309-2444 returned with exit code 1
	I0516 23:06:53.006314    6988 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220516225309-2444: (1.0698001s)
	E0516 23:06:53.006314    6988 network_create.go:104] error while trying to create docker network kindnet-20220516225309-2444 192.168.85.0/24: create docker network kindnet-20220516225309-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220516225309-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4137a116cbaed57b97fcdb6de10ce1feddf24e5829d8bad4b085e864cc697fa9 (br-4137a116cbae): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	W0516 23:06:53.006314    6988 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kindnet-20220516225309-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220516225309-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4137a116cbaed57b97fcdb6de10ce1feddf24e5829d8bad4b085e864cc697fa9 (br-4137a116cbae): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kindnet-20220516225309-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220516225309-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4137a116cbaed57b97fcdb6de10ce1feddf24e5829d8bad4b085e864cc697fa9 (br-4137a116cbae): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	I0516 23:06:53.022278    6988 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 23:06:54.111135    6988 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0888481s)
	I0516 23:06:54.119121    6988 cli_runner.go:164] Run: docker volume create kindnet-20220516225309-2444 --label name.minikube.sigs.k8s.io=kindnet-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 23:06:55.245233    6988 cli_runner.go:211] docker volume create kindnet-20220516225309-2444 --label name.minikube.sigs.k8s.io=kindnet-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 23:06:55.245233    6988 cli_runner.go:217] Completed: docker volume create kindnet-20220516225309-2444 --label name.minikube.sigs.k8s.io=kindnet-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: (1.1260749s)
	I0516 23:06:55.245233    6988 client.go:171] LocalClient.Create took 6.6296261s
	I0516 23:06:57.261650    6988 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:06:57.264466    6988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444
	W0516 23:06:58.334466    6988 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444 returned with exit code 1
	I0516 23:06:58.334466    6988 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444: (1.0699905s)
	I0516 23:06:58.334466    6988 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	I0516 23:06:58.684904    6988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444
	W0516 23:06:59.796053    6988 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444 returned with exit code 1
	I0516 23:06:59.796108    6988 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444: (1.110897s)
	W0516 23:06:59.796108    6988 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	
	W0516 23:06:59.796108    6988 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	I0516 23:06:59.806311    6988 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:06:59.813308    6988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444
	W0516 23:07:00.920450    6988 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444 returned with exit code 1
	I0516 23:07:00.920657    6988 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444: (1.1069236s)
	I0516 23:07:00.920657    6988 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	I0516 23:07:01.152102    6988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444
	W0516 23:07:02.258267    6988 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444 returned with exit code 1
	I0516 23:07:02.258267    6988 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444: (1.1061555s)
	W0516 23:07:02.258267    6988 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	
	W0516 23:07:02.258267    6988 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	I0516 23:07:02.258267    6988 start.go:134] duration metric: createHost completed in 13.6478467s
	I0516 23:07:02.270216    6988 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:07:02.277318    6988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444
	W0516 23:07:03.379173    6988 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444 returned with exit code 1
	I0516 23:07:03.379341    6988 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444: (1.1018164s)
	I0516 23:07:03.379446    6988 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	I0516 23:07:03.642912    6988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444
	W0516 23:07:04.756824    6988 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444 returned with exit code 1
	I0516 23:07:04.756855    6988 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444: (1.1137991s)
	W0516 23:07:04.757149    6988 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	
	W0516 23:07:04.757178    6988 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	I0516 23:07:04.768873    6988 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:07:04.775973    6988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444
	W0516 23:07:05.903625    6988 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444 returned with exit code 1
	I0516 23:07:05.903751    6988 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444: (1.1276136s)
	I0516 23:07:05.903782    6988 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	I0516 23:07:06.117701    6988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444
	W0516 23:07:07.208870    6988 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444 returned with exit code 1
	I0516 23:07:07.208949    6988 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444: (1.0909631s)
	W0516 23:07:07.209186    6988 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	
	W0516 23:07:07.209256    6988 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220516225309-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220516225309-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220516225309-2444
	I0516 23:07:07.209285    6988 fix.go:57] fixHost completed within 46.449138s
	I0516 23:07:07.209308    6988 start.go:81] releasing machines lock for "kindnet-20220516225309-2444", held for 46.4492143s
	W0516 23:07:07.209839    6988 out.go:239] * Failed to start docker container. Running "minikube delete -p kindnet-20220516225309-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for kindnet-20220516225309-2444 container: docker volume create kindnet-20220516225309-2444 --label name.minikube.sigs.k8s.io=kindnet-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kindnet-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/kindnet-20220516225309-2444': mkdir /var/lib/docker/volumes/kindnet-20220516225309-2444: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p kindnet-20220516225309-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for kindnet-20220516225309-2444 container: docker volume create kindnet-20220516225309-2444 --label name.minikube.sigs.k8s.io=kindnet-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kindnet-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/kindnet-20220516225309-2444': mkdir /var/lib/docker/volumes/kindnet-20220516225309-2444: read-only file system
	
	I0516 23:07:07.214783    6988 out.go:177] 
	W0516 23:07:07.217138    6988 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for kindnet-20220516225309-2444 container: docker volume create kindnet-20220516225309-2444 --label name.minikube.sigs.k8s.io=kindnet-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kindnet-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/kindnet-20220516225309-2444': mkdir /var/lib/docker/volumes/kindnet-20220516225309-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for kindnet-20220516225309-2444 container: docker volume create kindnet-20220516225309-2444 --label name.minikube.sigs.k8s.io=kindnet-20220516225309-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kindnet-20220516225309-2444: error while creating volume root path '/var/lib/docker/volumes/kindnet-20220516225309-2444': mkdir /var/lib/docker/volumes/kindnet-20220516225309-2444: read-only file system
	
	W0516 23:07:07.217138    6988 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0516 23:07:07.217138    6988 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0516 23:07:07.220352    6988 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 60
--- FAIL: TestNetworkPlugins/group/kindnet/Start (81.66s)

TestNetworkPlugins/group/bridge/Start (81.15s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-20220516225301-2444 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p bridge-20220516225301-2444 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker: exit status 60 (1m21.06701s)

-- stdout --
	* [bridge-20220516225301-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node bridge-20220516225301-2444 in cluster bridge-20220516225301-2444
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "bridge-20220516225301-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0516 23:05:51.840618    5108 out.go:296] Setting OutFile to fd 1788 ...
	I0516 23:05:51.901285    5108 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:05:51.901285    5108 out.go:309] Setting ErrFile to fd 1912...
	I0516 23:05:51.901285    5108 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:05:51.912711    5108 out.go:303] Setting JSON to false
	I0516 23:05:51.915512    5108 start.go:115] hostinfo: {"hostname":"minikube2","uptime":5464,"bootTime":1652736887,"procs":155,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0516 23:05:51.915512    5108 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0516 23:05:51.921128    5108 out.go:177] * [bridge-20220516225301-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0516 23:05:51.924150    5108 notify.go:193] Checking for updates...
	I0516 23:05:51.926223    5108 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0516 23:05:51.929359    5108 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0516 23:05:51.933778    5108 out.go:177]   - MINIKUBE_LOCATION=12739
	I0516 23:05:51.938366    5108 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0516 23:05:51.944937    5108 config.go:178] Loaded profile config "custom-weave-20220516225309-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:05:51.944937    5108 config.go:178] Loaded profile config "enable-default-cni-20220516225301-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:05:51.945647    5108 config.go:178] Loaded profile config "multinode-20220516223121-2444-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:05:51.945647    5108 driver.go:358] Setting default libvirt URI to qemu:///system
	I0516 23:05:54.772969    5108 docker.go:137] docker version: linux-20.10.14
	I0516 23:05:54.780611    5108 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 23:05:56.993704    5108 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.2130373s)
	I0516 23:05:56.994409    5108 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-05-16 23:05:55.8803037 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0516 23:05:56.998969    5108 out.go:177] * Using the docker driver based on user configuration
	I0516 23:05:57.001686    5108 start.go:284] selected driver: docker
	I0516 23:05:57.001686    5108 start.go:806] validating driver "docker" against <nil>
	I0516 23:05:57.001735    5108 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0516 23:05:57.076701    5108 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 23:05:59.265977    5108 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1892575s)
	I0516 23:05:59.266463    5108 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 23:05:58.1346267 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0516 23:05:59.266780    5108 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0516 23:05:59.267541    5108 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0516 23:05:59.271310    5108 out.go:177] * Using Docker Desktop driver with the root privilege
	I0516 23:05:59.273732    5108 cni.go:95] Creating CNI manager for "bridge"
	I0516 23:05:59.273732    5108 start_flags.go:301] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0516 23:05:59.273732    5108 start_flags.go:306] config:
	{Name:bridge-20220516225301-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:bridge-20220516225301-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 23:05:59.276363    5108 out.go:177] * Starting control plane node bridge-20220516225301-2444 in cluster bridge-20220516225301-2444
	I0516 23:05:59.280403    5108 cache.go:120] Beginning downloading kic base image for docker with docker
	I0516 23:05:59.282390    5108 out.go:177] * Pulling base image ...
	I0516 23:05:59.285414    5108 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0516 23:05:59.285414    5108 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0516 23:05:59.285414    5108 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0516 23:05:59.285414    5108 cache.go:57] Caching tarball of preloaded images
	I0516 23:05:59.286345    5108 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0516 23:05:59.286345    5108 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0516 23:05:59.286345    5108 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\bridge-20220516225301-2444\config.json ...
	I0516 23:05:59.286345    5108 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\bridge-20220516225301-2444\config.json: {Name:mk3920e2f3290671778cb677734f7484ca5b81e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0516 23:06:00.411250    5108 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0516 23:06:00.411250    5108 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:06:00.411250    5108 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:06:00.411250    5108 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
	I0516 23:06:00.411250    5108 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory, skipping pull
	I0516 23:06:00.411250    5108 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in cache, skipping pull
	I0516 23:06:00.411250    5108 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
	I0516 23:06:00.411250    5108 cache.go:160] Loading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from local cache
	I0516 23:06:00.411250    5108 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:06:02.780444    5108 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from cached tarball
	I0516 23:06:02.780444    5108 cache.go:206] Successfully downloaded all kic artifacts
	I0516 23:06:02.780444    5108 start.go:352] acquiring machines lock for bridge-20220516225301-2444: {Name:mk964d068432215e309d13b4685d4142537c947c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 23:06:02.781431    5108 start.go:356] acquired machines lock for "bridge-20220516225301-2444" in 0s
	I0516 23:06:02.781431    5108 start.go:91] Provisioning new machine with config: &{Name:bridge-20220516225301-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:bridge-20220516225301-2444 Namespace:default APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0516 23:06:02.781431    5108 start.go:131] createHost starting for "" (driver="docker")
	I0516 23:06:02.785452    5108 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0516 23:06:02.785452    5108 start.go:165] libmachine.API.Create for "bridge-20220516225301-2444" (driver="docker")
	I0516 23:06:02.786450    5108 client.go:168] LocalClient.Create starting
	I0516 23:06:02.786450    5108 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 23:06:02.786450    5108 main.go:134] libmachine: Decoding PEM data...
	I0516 23:06:02.786450    5108 main.go:134] libmachine: Parsing certificate...
	I0516 23:06:02.787473    5108 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 23:06:02.787473    5108 main.go:134] libmachine: Decoding PEM data...
	I0516 23:06:02.787473    5108 main.go:134] libmachine: Parsing certificate...
	I0516 23:06:02.797437    5108 cli_runner.go:164] Run: docker network inspect bridge-20220516225301-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:06:03.929681    5108 cli_runner.go:211] docker network inspect bridge-20220516225301-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:06:03.929681    5108 cli_runner.go:217] Completed: docker network inspect bridge-20220516225301-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1322339s)
	I0516 23:06:03.938678    5108 network_create.go:272] running [docker network inspect bridge-20220516225301-2444] to gather additional debugging logs...
	I0516 23:06:03.938678    5108 cli_runner.go:164] Run: docker network inspect bridge-20220516225301-2444
	W0516 23:06:05.090740    5108 cli_runner.go:211] docker network inspect bridge-20220516225301-2444 returned with exit code 1
	I0516 23:06:05.090740    5108 cli_runner.go:217] Completed: docker network inspect bridge-20220516225301-2444: (1.1520518s)
	I0516 23:06:05.090740    5108 network_create.go:275] error running [docker network inspect bridge-20220516225301-2444]: docker network inspect bridge-20220516225301-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: bridge-20220516225301-2444
	I0516 23:06:05.090740    5108 network_create.go:277] output of [docker network inspect bridge-20220516225301-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: bridge-20220516225301-2444
	
	** /stderr **
	I0516 23:06:05.098589    5108 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 23:06:06.216220    5108 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1172499s)
	I0516 23:06:06.239933    5108 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000006240] misses:0}
	I0516 23:06:06.239933    5108 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:06:06.239933    5108 network_create.go:115] attempt to create docker network bridge-20220516225301-2444 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0516 23:06:06.247927    5108 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220516225301-2444
	W0516 23:06:07.386800    5108 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220516225301-2444 returned with exit code 1
	I0516 23:06:07.386852    5108 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220516225301-2444: (1.1387488s)
	W0516 23:06:07.386852    5108 network_create.go:107] failed to create docker network bridge-20220516225301-2444 192.168.49.0/24, will retry: subnet is taken
	I0516 23:06:07.406875    5108 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006240] amended:false}} dirty:map[] misses:0}
	I0516 23:06:07.407012    5108 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:06:07.426502    5108 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006240] amended:true}} dirty:map[192.168.49.0:0xc000006240 192.168.58.0:0xc000534ab8] misses:0}
	I0516 23:06:07.426502    5108 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:06:07.426502    5108 network_create.go:115] attempt to create docker network bridge-20220516225301-2444 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0516 23:06:07.435962    5108 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220516225301-2444
	W0516 23:06:08.491236    5108 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220516225301-2444 returned with exit code 1
	I0516 23:06:08.492276    5108 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220516225301-2444: (1.0552646s)
	W0516 23:06:08.492493    5108 network_create.go:107] failed to create docker network bridge-20220516225301-2444 192.168.58.0/24, will retry: subnet is taken
	I0516 23:06:08.513492    5108 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006240] amended:true}} dirty:map[192.168.49.0:0xc000006240 192.168.58.0:0xc000534ab8] misses:1}
	I0516 23:06:08.513492    5108 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:06:08.533616    5108 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006240] amended:true}} dirty:map[192.168.49.0:0xc000006240 192.168.58.0:0xc000534ab8 192.168.67.0:0xc0007824f0] misses:1}
	I0516 23:06:08.533616    5108 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:06:08.533616    5108 network_create.go:115] attempt to create docker network bridge-20220516225301-2444 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0516 23:06:08.544998    5108 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220516225301-2444
	W0516 23:06:09.657382    5108 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220516225301-2444 returned with exit code 1
	I0516 23:06:09.657445    5108 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220516225301-2444: (1.1122138s)
	W0516 23:06:09.657445    5108 network_create.go:107] failed to create docker network bridge-20220516225301-2444 192.168.67.0/24, will retry: subnet is taken
	I0516 23:06:09.681180    5108 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006240] amended:true}} dirty:map[192.168.49.0:0xc000006240 192.168.58.0:0xc000534ab8 192.168.67.0:0xc0007824f0] misses:2}
	I0516 23:06:09.681418    5108 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:06:09.701010    5108 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006240] amended:true}} dirty:map[192.168.49.0:0xc000006240 192.168.58.0:0xc000534ab8 192.168.67.0:0xc0007824f0 192.168.76.0:0xc000534b50] misses:2}
	I0516 23:06:09.702009    5108 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:06:09.702009    5108 network_create.go:115] attempt to create docker network bridge-20220516225301-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0516 23:06:09.714669    5108 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220516225301-2444
	W0516 23:06:10.874647    5108 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220516225301-2444 returned with exit code 1
	I0516 23:06:10.874647    5108 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220516225301-2444: (1.1599104s)
	E0516 23:06:10.874647    5108 network_create.go:104] error while trying to create docker network bridge-20220516225301-2444 192.168.76.0/24: create docker network bridge-20220516225301-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220516225301-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 2e40b626c702f8079dde6d934479e24cbd32915921afd1ae25d7251cb2b6e264 (br-2e40b626c702): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	W0516 23:06:10.874647    5108 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network bridge-20220516225301-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220516225301-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 2e40b626c702f8079dde6d934479e24cbd32915921afd1ae25d7251cb2b6e264 (br-2e40b626c702): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network bridge-20220516225301-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220516225301-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 2e40b626c702f8079dde6d934479e24cbd32915921afd1ae25d7251cb2b6e264 (br-2e40b626c702): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	I0516 23:06:10.896464    5108 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 23:06:12.075162    5108 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1785778s)
	I0516 23:06:12.082098    5108 cli_runner.go:164] Run: docker volume create bridge-20220516225301-2444 --label name.minikube.sigs.k8s.io=bridge-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 23:06:13.204022    5108 cli_runner.go:211] docker volume create bridge-20220516225301-2444 --label name.minikube.sigs.k8s.io=bridge-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 23:06:13.204181    5108 cli_runner.go:217] Completed: docker volume create bridge-20220516225301-2444 --label name.minikube.sigs.k8s.io=bridge-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true: (1.1219146s)
	I0516 23:06:13.204215    5108 client.go:171] LocalClient.Create took 10.4176733s
	I0516 23:06:15.229848    5108 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:06:15.236988    5108 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444
	W0516 23:06:16.354798    5108 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444 returned with exit code 1
	I0516 23:06:16.354845    5108 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444: (1.1176341s)
	I0516 23:06:16.355014    5108 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	I0516 23:06:16.650738    5108 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444
	W0516 23:06:17.683292    5108 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444 returned with exit code 1
	I0516 23:06:17.683292    5108 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444: (1.0324153s)
	W0516 23:06:17.683292    5108 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	
	W0516 23:06:17.683292    5108 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	I0516 23:06:17.695089    5108 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:06:17.703717    5108 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444
	W0516 23:06:18.727254    5108 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444 returned with exit code 1
	I0516 23:06:18.727420    5108 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444: (1.0235281s)
	I0516 23:06:18.727420    5108 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	I0516 23:06:19.030195    5108 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444
	W0516 23:06:20.099310    5108 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444 returned with exit code 1
	I0516 23:06:20.099461    5108 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444: (1.0688872s)
	W0516 23:06:20.099519    5108 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	
	W0516 23:06:20.099519    5108 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	I0516 23:06:20.099519    5108 start.go:134] duration metric: createHost completed in 17.3179358s
	I0516 23:06:20.099519    5108 start.go:81] releasing machines lock for "bridge-20220516225301-2444", held for 17.3179358s
	W0516 23:06:20.099519    5108 start.go:608] error starting host: creating host: create: creating: setting up container node: creating volume for bridge-20220516225301-2444 container: docker volume create bridge-20220516225301-2444 --label name.minikube.sigs.k8s.io=bridge-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create bridge-20220516225301-2444: error while creating volume root path '/var/lib/docker/volumes/bridge-20220516225301-2444': mkdir /var/lib/docker/volumes/bridge-20220516225301-2444: read-only file system
	I0516 23:06:20.116636    5108 cli_runner.go:164] Run: docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}
	W0516 23:06:21.181657    5108 cli_runner.go:211] docker container inspect bridge-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:06:21.181657    5108 cli_runner.go:217] Completed: docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}: (1.0648756s)
	I0516 23:06:21.181657    5108 delete.go:82] Unable to get host status for bridge-20220516225301-2444, assuming it has already been deleted: state: unknown state "bridge-20220516225301-2444": docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	W0516 23:06:21.181657    5108 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for bridge-20220516225301-2444 container: docker volume create bridge-20220516225301-2444 --label name.minikube.sigs.k8s.io=bridge-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create bridge-20220516225301-2444: error while creating volume root path '/var/lib/docker/volumes/bridge-20220516225301-2444': mkdir /var/lib/docker/volumes/bridge-20220516225301-2444: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for bridge-20220516225301-2444 container: docker volume create bridge-20220516225301-2444 --label name.minikube.sigs.k8s.io=bridge-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create bridge-20220516225301-2444: error while creating volume root path '/var/lib/docker/volumes/bridge-20220516225301-2444': mkdir /var/lib/docker/volumes/bridge-20220516225301-2444: read-only file system
	
	I0516 23:06:21.181657    5108 start.go:623] Will try again in 5 seconds ...
	I0516 23:06:26.183751    5108 start.go:352] acquiring machines lock for bridge-20220516225301-2444: {Name:mk964d068432215e309d13b4685d4142537c947c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 23:06:26.183751    5108 start.go:356] acquired machines lock for "bridge-20220516225301-2444" in 0s
	I0516 23:06:26.183751    5108 start.go:94] Skipping create...Using existing machine configuration
	I0516 23:06:26.183751    5108 fix.go:55] fixHost starting: 
	I0516 23:06:26.198382    5108 cli_runner.go:164] Run: docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}
	W0516 23:06:27.230853    5108 cli_runner.go:211] docker container inspect bridge-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:06:27.231028    5108 cli_runner.go:217] Completed: docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}: (1.0323097s)
	I0516 23:06:27.231102    5108 fix.go:103] recreateIfNeeded on bridge-20220516225301-2444: state= err=unknown state "bridge-20220516225301-2444": docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	I0516 23:06:27.231102    5108 fix.go:108] machineExists: false. err=machine does not exist
	I0516 23:06:27.236131    5108 out.go:177] * docker "bridge-20220516225301-2444" container is missing, will recreate.
	I0516 23:06:27.238335    5108 delete.go:124] DEMOLISHING bridge-20220516225301-2444 ...
	I0516 23:06:27.254880    5108 cli_runner.go:164] Run: docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}
	W0516 23:06:28.278202    5108 cli_runner.go:211] docker container inspect bridge-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:06:28.278202    5108 cli_runner.go:217] Completed: docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}: (1.0233134s)
	W0516 23:06:28.278202    5108 stop.go:75] unable to get state: unknown state "bridge-20220516225301-2444": docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	I0516 23:06:28.278202    5108 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "bridge-20220516225301-2444": docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	I0516 23:06:28.293212    5108 cli_runner.go:164] Run: docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}
	W0516 23:06:29.337189    5108 cli_runner.go:211] docker container inspect bridge-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:06:29.337240    5108 cli_runner.go:217] Completed: docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}: (1.0436909s)
	I0516 23:06:29.337240    5108 delete.go:82] Unable to get host status for bridge-20220516225301-2444, assuming it has already been deleted: state: unknown state "bridge-20220516225301-2444": docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	I0516 23:06:29.345084    5108 cli_runner.go:164] Run: docker container inspect -f {{.Id}} bridge-20220516225301-2444
	W0516 23:06:30.394994    5108 cli_runner.go:211] docker container inspect -f {{.Id}} bridge-20220516225301-2444 returned with exit code 1
	I0516 23:06:30.394994    5108 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} bridge-20220516225301-2444: (1.0499006s)
	I0516 23:06:30.394994    5108 kic.go:356] could not find the container bridge-20220516225301-2444 to remove it. will try anyways
	I0516 23:06:30.404550    5108 cli_runner.go:164] Run: docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}
	W0516 23:06:31.459958    5108 cli_runner.go:211] docker container inspect bridge-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:06:31.460114    5108 cli_runner.go:217] Completed: docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}: (1.055398s)
	W0516 23:06:31.460175    5108 oci.go:84] error getting container status, will try to delete anyways: unknown state "bridge-20220516225301-2444": docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	I0516 23:06:31.469093    5108 cli_runner.go:164] Run: docker exec --privileged -t bridge-20220516225301-2444 /bin/bash -c "sudo init 0"
	W0516 23:06:32.603422    5108 cli_runner.go:211] docker exec --privileged -t bridge-20220516225301-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 23:06:32.603422    5108 cli_runner.go:217] Completed: docker exec --privileged -t bridge-20220516225301-2444 /bin/bash -c "sudo init 0": (1.1343195s)
	I0516 23:06:32.603422    5108 oci.go:641] error shutdown bridge-20220516225301-2444: docker exec --privileged -t bridge-20220516225301-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	I0516 23:06:33.619578    5108 cli_runner.go:164] Run: docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}
	W0516 23:06:34.702391    5108 cli_runner.go:211] docker container inspect bridge-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:06:34.702425    5108 cli_runner.go:217] Completed: docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}: (1.0825519s)
	I0516 23:06:34.702451    5108 oci.go:653] temporary error verifying shutdown: unknown state "bridge-20220516225301-2444": docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	I0516 23:06:34.702451    5108 oci.go:655] temporary error: container bridge-20220516225301-2444 status is  but expect it to be exited
	I0516 23:06:34.702451    5108 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "bridge-20220516225301-2444": docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	I0516 23:06:35.189706    5108 cli_runner.go:164] Run: docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}
	W0516 23:06:36.255015    5108 cli_runner.go:211] docker container inspect bridge-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:06:36.255015    5108 cli_runner.go:217] Completed: docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}: (1.0653001s)
	I0516 23:06:36.255015    5108 oci.go:653] temporary error verifying shutdown: unknown state "bridge-20220516225301-2444": docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	I0516 23:06:36.255015    5108 oci.go:655] temporary error: container bridge-20220516225301-2444 status is  but expect it to be exited
	I0516 23:06:36.255015    5108 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "bridge-20220516225301-2444": docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	I0516 23:06:37.156500    5108 cli_runner.go:164] Run: docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}
	W0516 23:06:38.191454    5108 cli_runner.go:211] docker container inspect bridge-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:06:38.191454    5108 cli_runner.go:217] Completed: docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}: (1.0349439s)
	I0516 23:06:38.191454    5108 oci.go:653] temporary error verifying shutdown: unknown state "bridge-20220516225301-2444": docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	I0516 23:06:38.191454    5108 oci.go:655] temporary error: container bridge-20220516225301-2444 status is  but expect it to be exited
	I0516 23:06:38.191454    5108 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "bridge-20220516225301-2444": docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	I0516 23:06:38.847076    5108 cli_runner.go:164] Run: docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}
	W0516 23:06:39.913961    5108 cli_runner.go:211] docker container inspect bridge-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:06:39.913999    5108 cli_runner.go:217] Completed: docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}: (1.0666963s)
	I0516 23:06:39.914079    5108 oci.go:653] temporary error verifying shutdown: unknown state "bridge-20220516225301-2444": docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	I0516 23:06:39.914205    5108 oci.go:655] temporary error: container bridge-20220516225301-2444 status is  but expect it to be exited
	I0516 23:06:39.914246    5108 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "bridge-20220516225301-2444": docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	I0516 23:06:41.033092    5108 cli_runner.go:164] Run: docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}
	W0516 23:06:42.085009    5108 cli_runner.go:211] docker container inspect bridge-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:06:42.085009    5108 cli_runner.go:217] Completed: docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}: (1.0519077s)
	I0516 23:06:42.085009    5108 oci.go:653] temporary error verifying shutdown: unknown state "bridge-20220516225301-2444": docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	I0516 23:06:42.085009    5108 oci.go:655] temporary error: container bridge-20220516225301-2444 status is  but expect it to be exited
	I0516 23:06:42.085009    5108 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "bridge-20220516225301-2444": docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	I0516 23:06:43.618011    5108 cli_runner.go:164] Run: docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}
	W0516 23:06:44.668379    5108 cli_runner.go:211] docker container inspect bridge-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:06:44.668379    5108 cli_runner.go:217] Completed: docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}: (1.0503586s)
	I0516 23:06:44.668379    5108 oci.go:653] temporary error verifying shutdown: unknown state "bridge-20220516225301-2444": docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	I0516 23:06:44.668379    5108 oci.go:655] temporary error: container bridge-20220516225301-2444 status is  but expect it to be exited
	I0516 23:06:44.668379    5108 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "bridge-20220516225301-2444": docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	I0516 23:06:47.722511    5108 cli_runner.go:164] Run: docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}
	W0516 23:06:48.767702    5108 cli_runner.go:211] docker container inspect bridge-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:06:48.767702    5108 cli_runner.go:217] Completed: docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}: (1.0451353s)
	I0516 23:06:48.767702    5108 oci.go:653] temporary error verifying shutdown: unknown state "bridge-20220516225301-2444": docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	I0516 23:06:48.767702    5108 oci.go:655] temporary error: container bridge-20220516225301-2444 status is  but expect it to be exited
	I0516 23:06:48.767702    5108 oci.go:88] couldn't shut down bridge-20220516225301-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "bridge-20220516225301-2444": docker container inspect bridge-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	 
	I0516 23:06:48.774705    5108 cli_runner.go:164] Run: docker rm -f -v bridge-20220516225301-2444
	I0516 23:06:49.857236    5108 cli_runner.go:217] Completed: docker rm -f -v bridge-20220516225301-2444: (1.0825215s)
	I0516 23:06:49.864236    5108 cli_runner.go:164] Run: docker container inspect -f {{.Id}} bridge-20220516225301-2444
	W0516 23:06:50.927876    5108 cli_runner.go:211] docker container inspect -f {{.Id}} bridge-20220516225301-2444 returned with exit code 1
	I0516 23:06:50.927876    5108 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} bridge-20220516225301-2444: (1.0635614s)
	I0516 23:06:50.938361    5108 cli_runner.go:164] Run: docker network inspect bridge-20220516225301-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:06:52.032683    5108 cli_runner.go:211] docker network inspect bridge-20220516225301-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:06:52.032683    5108 cli_runner.go:217] Completed: docker network inspect bridge-20220516225301-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.094312s)
	I0516 23:06:52.041598    5108 network_create.go:272] running [docker network inspect bridge-20220516225301-2444] to gather additional debugging logs...
	I0516 23:06:52.041598    5108 cli_runner.go:164] Run: docker network inspect bridge-20220516225301-2444
	W0516 23:06:53.115323    5108 cli_runner.go:211] docker network inspect bridge-20220516225301-2444 returned with exit code 1
	I0516 23:06:53.115323    5108 cli_runner.go:217] Completed: docker network inspect bridge-20220516225301-2444: (1.0737158s)
	I0516 23:06:53.115323    5108 network_create.go:275] error running [docker network inspect bridge-20220516225301-2444]: docker network inspect bridge-20220516225301-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: bridge-20220516225301-2444
	I0516 23:06:53.115323    5108 network_create.go:277] output of [docker network inspect bridge-20220516225301-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: bridge-20220516225301-2444
	
	** /stderr **
	W0516 23:06:53.116342    5108 delete.go:139] delete failed (probably ok) <nil>
	I0516 23:06:53.116342    5108 fix.go:115] Sleeping 1 second for extra luck!
	I0516 23:06:54.126927    5108 start.go:131] createHost starting for "" (driver="docker")
	I0516 23:06:54.130752    5108 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0516 23:06:54.131113    5108 start.go:165] libmachine.API.Create for "bridge-20220516225301-2444" (driver="docker")
	I0516 23:06:54.131204    5108 client.go:168] LocalClient.Create starting
	I0516 23:06:54.131767    5108 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 23:06:54.132088    5108 main.go:134] libmachine: Decoding PEM data...
	I0516 23:06:54.132158    5108 main.go:134] libmachine: Parsing certificate...
	I0516 23:06:54.132362    5108 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 23:06:54.132627    5108 main.go:134] libmachine: Decoding PEM data...
	I0516 23:06:54.132691    5108 main.go:134] libmachine: Parsing certificate...
	I0516 23:06:54.142293    5108 cli_runner.go:164] Run: docker network inspect bridge-20220516225301-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:06:55.229226    5108 cli_runner.go:211] docker network inspect bridge-20220516225301-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:06:55.229226    5108 cli_runner.go:217] Completed: docker network inspect bridge-20220516225301-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0869229s)
	I0516 23:06:55.236229    5108 network_create.go:272] running [docker network inspect bridge-20220516225301-2444] to gather additional debugging logs...
	I0516 23:06:55.236229    5108 cli_runner.go:164] Run: docker network inspect bridge-20220516225301-2444
	W0516 23:06:56.285971    5108 cli_runner.go:211] docker network inspect bridge-20220516225301-2444 returned with exit code 1
	I0516 23:06:56.285971    5108 cli_runner.go:217] Completed: docker network inspect bridge-20220516225301-2444: (1.0497335s)
	I0516 23:06:56.285971    5108 network_create.go:275] error running [docker network inspect bridge-20220516225301-2444]: docker network inspect bridge-20220516225301-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: bridge-20220516225301-2444
	I0516 23:06:56.285971    5108 network_create.go:277] output of [docker network inspect bridge-20220516225301-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: bridge-20220516225301-2444
	
	** /stderr **
	I0516 23:06:56.293822    5108 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 23:06:57.353555    5108 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0596255s)
	I0516 23:06:57.371553    5108 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006240] amended:true}} dirty:map[192.168.49.0:0xc000006240 192.168.58.0:0xc000534ab8 192.168.67.0:0xc0007824f0 192.168.76.0:0xc000534b50] misses:2}
	I0516 23:06:57.371553    5108 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:06:57.384719    5108 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006240] amended:true}} dirty:map[192.168.49.0:0xc000006240 192.168.58.0:0xc000534ab8 192.168.67.0:0xc0007824f0 192.168.76.0:0xc000534b50] misses:3}
	I0516 23:06:57.384719    5108 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:06:57.403883    5108 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006240 192.168.58.0:0xc000534ab8 192.168.67.0:0xc0007824f0 192.168.76.0:0xc000534b50] amended:false}} dirty:map[] misses:0}
	I0516 23:06:57.403883    5108 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:06:57.418727    5108 network.go:279] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006240 192.168.58.0:0xc000534ab8 192.168.67.0:0xc0007824f0 192.168.76.0:0xc000534b50] amended:false}} dirty:map[] misses:0}
	I0516 23:06:57.418727    5108 network.go:238] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:06:57.434724    5108 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006240 192.168.58.0:0xc000534ab8 192.168.67.0:0xc0007824f0 192.168.76.0:0xc000534b50] amended:true}} dirty:map[192.168.49.0:0xc000006240 192.168.58.0:0xc000534ab8 192.168.67.0:0xc0007824f0 192.168.76.0:0xc000534b50 192.168.85.0:0xc0000063d0] misses:0}
	I0516 23:06:57.434767    5108 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:06:57.434767    5108 network_create.go:115] attempt to create docker network bridge-20220516225301-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0516 23:06:57.442238    5108 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220516225301-2444
	W0516 23:06:58.506870    5108 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220516225301-2444 returned with exit code 1
	I0516 23:06:58.507018    5108 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220516225301-2444: (1.0644159s)
	E0516 23:06:58.507144    5108 network_create.go:104] error while trying to create docker network bridge-20220516225301-2444 192.168.85.0/24: create docker network bridge-20220516225301-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220516225301-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f3014d927ac2603fbe7595efa7143df6bb5eda0a2e032305a20eab58f121716a (br-f3014d927ac2): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	W0516 23:06:58.507144    5108 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network bridge-20220516225301-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220516225301-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f3014d927ac2603fbe7595efa7143df6bb5eda0a2e032305a20eab58f121716a (br-f3014d927ac2): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network bridge-20220516225301-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220516225301-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f3014d927ac2603fbe7595efa7143df6bb5eda0a2e032305a20eab58f121716a (br-f3014d927ac2): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	I0516 23:06:58.524001    5108 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 23:06:59.610068    5108 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0859056s)
	I0516 23:06:59.617680    5108 cli_runner.go:164] Run: docker volume create bridge-20220516225301-2444 --label name.minikube.sigs.k8s.io=bridge-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 23:07:00.696447    5108 cli_runner.go:211] docker volume create bridge-20220516225301-2444 --label name.minikube.sigs.k8s.io=bridge-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 23:07:00.696566    5108 cli_runner.go:217] Completed: docker volume create bridge-20220516225301-2444 --label name.minikube.sigs.k8s.io=bridge-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0785356s)
	I0516 23:07:00.696638    5108 client.go:171] LocalClient.Create took 6.5653759s
	I0516 23:07:02.716186    5108 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:07:02.726073    5108 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444
	W0516 23:07:03.834812    5108 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444 returned with exit code 1
	I0516 23:07:03.834879    5108 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444: (1.1086916s)
	I0516 23:07:03.834879    5108 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	I0516 23:07:04.175408    5108 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444
	W0516 23:07:05.291014    5108 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444 returned with exit code 1
	I0516 23:07:05.291160    5108 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444: (1.1155957s)
	W0516 23:07:05.291456    5108 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	
	W0516 23:07:05.291529    5108 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	I0516 23:07:05.303026    5108 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:07:05.310094    5108 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444
	W0516 23:07:06.389009    5108 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444 returned with exit code 1
	I0516 23:07:06.389040    5108 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444: (1.078745s)
	I0516 23:07:06.389244    5108 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	I0516 23:07:06.623067    5108 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444
	W0516 23:07:07.739233    5108 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444 returned with exit code 1
	I0516 23:07:07.739281    5108 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444: (1.1159213s)
	W0516 23:07:07.739461    5108 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	
	W0516 23:07:07.739565    5108 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	I0516 23:07:07.739642    5108 start.go:134] duration metric: createHost completed in 13.6125413s
	I0516 23:07:07.751798    5108 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:07:07.758644    5108 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444
	W0516 23:07:08.889124    5108 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444 returned with exit code 1
	I0516 23:07:08.889402    5108 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444: (1.1304705s)
	I0516 23:07:08.889487    5108 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	I0516 23:07:09.155602    5108 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444
	W0516 23:07:10.217160    5108 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444 returned with exit code 1
	I0516 23:07:10.217160    5108 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444: (1.0615482s)
	W0516 23:07:10.217160    5108 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	
	W0516 23:07:10.217160    5108 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	I0516 23:07:10.228215    5108 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:07:10.235855    5108 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444
	W0516 23:07:11.320483    5108 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444 returned with exit code 1
	I0516 23:07:11.320483    5108 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444: (1.0846175s)
	I0516 23:07:11.320483    5108 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	I0516 23:07:11.531489    5108 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444
	W0516 23:07:12.625961    5108 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444 returned with exit code 1
	I0516 23:07:12.625961    5108 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444: (1.0944622s)
	W0516 23:07:12.625961    5108 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	
	W0516 23:07:12.625961    5108 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220516225301-2444
	I0516 23:07:12.625961    5108 fix.go:57] fixHost completed within 46.441797s
	I0516 23:07:12.625961    5108 start.go:81] releasing machines lock for "bridge-20220516225301-2444", held for 46.441797s
	W0516 23:07:12.625961    5108 out.go:239] * Failed to start docker container. Running "minikube delete -p bridge-20220516225301-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for bridge-20220516225301-2444 container: docker volume create bridge-20220516225301-2444 --label name.minikube.sigs.k8s.io=bridge-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create bridge-20220516225301-2444: error while creating volume root path '/var/lib/docker/volumes/bridge-20220516225301-2444': mkdir /var/lib/docker/volumes/bridge-20220516225301-2444: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p bridge-20220516225301-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for bridge-20220516225301-2444 container: docker volume create bridge-20220516225301-2444 --label name.minikube.sigs.k8s.io=bridge-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create bridge-20220516225301-2444: error while creating volume root path '/var/lib/docker/volumes/bridge-20220516225301-2444': mkdir /var/lib/docker/volumes/bridge-20220516225301-2444: read-only file system
	
	I0516 23:07:12.632423    5108 out.go:177] 
	W0516 23:07:12.634507    5108 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for bridge-20220516225301-2444 container: docker volume create bridge-20220516225301-2444 --label name.minikube.sigs.k8s.io=bridge-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create bridge-20220516225301-2444: error while creating volume root path '/var/lib/docker/volumes/bridge-20220516225301-2444': mkdir /var/lib/docker/volumes/bridge-20220516225301-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for bridge-20220516225301-2444 container: docker volume create bridge-20220516225301-2444 --label name.minikube.sigs.k8s.io=bridge-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create bridge-20220516225301-2444: error while creating volume root path '/var/lib/docker/volumes/bridge-20220516225301-2444': mkdir /var/lib/docker/volumes/bridge-20220516225301-2444: read-only file system
	
	W0516 23:07:12.634507    5108 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0516 23:07:12.635417    5108 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0516 23:07:12.638252    5108 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 60
--- FAIL: TestNetworkPlugins/group/bridge/Start (81.15s)
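The `networks have overlapping IPv4` error in the stderr above means the subnet minikube requested (192.168.85.0/24) intersects the subnet of an existing bridge network (`br-ea4bbeff936d`); the daemon refuses to create a second network on an overlapping range. The log does not show the existing network's subnet, so as an illustration the sketch below assumes it occupies the same /24 — the daemon's check is equivalent to this stdlib overlap test:

```python
import ipaddress

# Subnet from the failed `docker network create` call in the log above.
requested = ipaddress.ip_network("192.168.85.0/24")

# Hypothetical subnet of the stale bridge network the daemon complained about;
# the actual range is not shown in the log, but any intersecting range triggers
# the same "overlapping IPv4" rejection.
existing = ipaddress.ip_network("192.168.85.0/24")

print(requested.overlaps(existing))  # True -> daemon rejects the create
```

Clearing the stale bridge networks (or restarting Docker, as the later `PR_DOCKER_READONLY_VOL` suggestion in this log advises for the read-only `/var/lib/docker` state) would free the range.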

TestNetworkPlugins/group/kubenet/Start (81.26s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-20220516225301-2444 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker

=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubenet-20220516225301-2444 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker: exit status 60 (1m21.186935s)

-- stdout --
	* [kubenet-20220516225301-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node kubenet-20220516225301-2444 in cluster kubenet-20220516225301-2444
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "kubenet-20220516225301-2444" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0516 23:05:55.943646    1284 out.go:296] Setting OutFile to fd 1576 ...
	I0516 23:05:56.010564    1284 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:05:56.010624    1284 out.go:309] Setting ErrFile to fd 1460...
	I0516 23:05:56.010669    1284 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 23:05:56.024949    1284 out.go:303] Setting JSON to false
	I0516 23:05:56.028039    1284 start.go:115] hostinfo: {"hostname":"minikube2","uptime":5468,"bootTime":1652736888,"procs":158,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0516 23:05:56.028039    1284 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0516 23:05:56.040005    1284 out.go:177] * [kubenet-20220516225301-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0516 23:05:56.043634    1284 notify.go:193] Checking for updates...
	I0516 23:05:56.046089    1284 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0516 23:05:56.049427    1284 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0516 23:05:56.052132    1284 out.go:177]   - MINIKUBE_LOCATION=12739
	I0516 23:05:56.055335    1284 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0516 23:05:56.059451    1284 config.go:178] Loaded profile config "enable-default-cni-20220516225301-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:05:56.059496    1284 config.go:178] Loaded profile config "kindnet-20220516225309-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:05:56.060091    1284 config.go:178] Loaded profile config "multinode-20220516223121-2444-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 23:05:56.060091    1284 driver.go:358] Setting default libvirt URI to qemu:///system
	I0516 23:05:58.820184    1284 docker.go:137] docker version: linux-20.10.14
	I0516 23:05:58.828872    1284 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 23:06:01.033177    1284 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.204286s)
	I0516 23:06:01.033652    1284 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-16 23:05:59.9032683 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0516 23:06:01.037857    1284 out.go:177] * Using the docker driver based on user configuration
	I0516 23:06:01.044212    1284 start.go:284] selected driver: docker
	I0516 23:06:01.044310    1284 start.go:806] validating driver "docker" against <nil>
	I0516 23:06:01.044352    1284 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0516 23:06:01.114237    1284 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 23:06:03.333382    1284 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.219125s)
	I0516 23:06:03.334096    1284 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-05-16 23:06:02.2383929 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0516 23:06:03.334582    1284 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0516 23:06:03.335701    1284 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0516 23:06:03.338882    1284 out.go:177] * Using Docker Desktop driver with the root privilege
	I0516 23:06:03.341025    1284 cni.go:91] network plugin configured as "kubenet", returning disabled
	I0516 23:06:03.341025    1284 start_flags.go:306] config:
	{Name:kubenet-20220516225301-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:kubenet-20220516225301-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:dock
er CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 23:06:03.343958    1284 out.go:177] * Starting control plane node kubenet-20220516225301-2444 in cluster kubenet-20220516225301-2444
	I0516 23:06:03.351266    1284 cache.go:120] Beginning downloading kic base image for docker with docker
	I0516 23:06:03.353601    1284 out.go:177] * Pulling base image ...
	I0516 23:06:03.356384    1284 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0516 23:06:03.356786    1284 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0516 23:06:03.356786    1284 cache.go:57] Caching tarball of preloaded images
	I0516 23:06:03.356786    1284 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0516 23:06:03.356786    1284 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0516 23:06:03.357595    1284 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0516 23:06:03.357595    1284 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kubenet-20220516225301-2444\config.json ...
	I0516 23:06:03.358132    1284 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kubenet-20220516225301-2444\config.json: {Name:mk26a7fd50dae09442c1457b88792608987172cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0516 23:06:04.498768    1284 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0516 23:06:04.498908    1284 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:06:04.499310    1284 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:06:04.499365    1284 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
	I0516 23:06:04.499365    1284 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory, skipping pull
	I0516 23:06:04.499365    1284 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in cache, skipping pull
	I0516 23:06:04.499365    1284 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
	I0516 23:06:04.499365    1284 cache.go:160] Loading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from local cache
	I0516 23:06:04.499365    1284 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 23:06:06.893207    1284 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c from cached tarball
	I0516 23:06:06.893207    1284 cache.go:206] Successfully downloaded all kic artifacts
	I0516 23:06:06.893207    1284 start.go:352] acquiring machines lock for kubenet-20220516225301-2444: {Name:mkc6455833424c28b8d4ffee2207efd4c1b99a93 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 23:06:06.893207    1284 start.go:356] acquired machines lock for "kubenet-20220516225301-2444" in 0s
	I0516 23:06:06.893870    1284 start.go:91] Provisioning new machine with config: &{Name:kubenet-20220516225301-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:kubenet-20220516225301-2444 Namespace:default APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0516 23:06:06.894162    1284 start.go:131] createHost starting for "" (driver="docker")
	I0516 23:06:06.899618    1284 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0516 23:06:06.900254    1284 start.go:165] libmachine.API.Create for "kubenet-20220516225301-2444" (driver="docker")
	I0516 23:06:06.900254    1284 client.go:168] LocalClient.Create starting
	I0516 23:06:06.901192    1284 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 23:06:06.901192    1284 main.go:134] libmachine: Decoding PEM data...
	I0516 23:06:06.901192    1284 main.go:134] libmachine: Parsing certificate...
	I0516 23:06:06.901192    1284 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 23:06:06.904613    1284 main.go:134] libmachine: Decoding PEM data...
	I0516 23:06:06.904613    1284 main.go:134] libmachine: Parsing certificate...
	I0516 23:06:06.922785    1284 cli_runner.go:164] Run: docker network inspect kubenet-20220516225301-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:06:08.032784    1284 cli_runner.go:211] docker network inspect kubenet-20220516225301-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:06:08.032831    1284 cli_runner.go:217] Completed: docker network inspect kubenet-20220516225301-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1098012s)
	I0516 23:06:08.043155    1284 network_create.go:272] running [docker network inspect kubenet-20220516225301-2444] to gather additional debugging logs...
	I0516 23:06:08.043258    1284 cli_runner.go:164] Run: docker network inspect kubenet-20220516225301-2444
	W0516 23:06:09.150877    1284 cli_runner.go:211] docker network inspect kubenet-20220516225301-2444 returned with exit code 1
	I0516 23:06:09.150877    1284 cli_runner.go:217] Completed: docker network inspect kubenet-20220516225301-2444: (1.1072155s)
	I0516 23:06:09.150877    1284 network_create.go:275] error running [docker network inspect kubenet-20220516225301-2444]: docker network inspect kubenet-20220516225301-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubenet-20220516225301-2444
	I0516 23:06:09.150877    1284 network_create.go:277] output of [docker network inspect kubenet-20220516225301-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubenet-20220516225301-2444
	
	** /stderr **
	I0516 23:06:09.160661    1284 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 23:06:10.278391    1284 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1175531s)
	I0516 23:06:10.300309    1284 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0005161f0] misses:0}
	I0516 23:06:10.300309    1284 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:06:10.300309    1284 network_create.go:115] attempt to create docker network kubenet-20220516225301-2444 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0516 23:06:10.308519    1284 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220516225301-2444
	W0516 23:06:11.470990    1284 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220516225301-2444 returned with exit code 1
	I0516 23:06:11.471048    1284 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220516225301-2444: (1.162308s)
	W0516 23:06:11.471048    1284 network_create.go:107] failed to create docker network kubenet-20220516225301-2444 192.168.49.0/24, will retry: subnet is taken
	I0516 23:06:11.492854    1284 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005161f0] amended:false}} dirty:map[] misses:0}
	I0516 23:06:11.492854    1284 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:06:11.509876    1284 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005161f0] amended:true}} dirty:map[192.168.49.0:0xc0005161f0 192.168.58.0:0xc000516448] misses:0}
	I0516 23:06:11.509876    1284 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:06:11.509876    1284 network_create.go:115] attempt to create docker network kubenet-20220516225301-2444 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0516 23:06:11.522747    1284 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220516225301-2444
	W0516 23:06:12.639085    1284 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220516225301-2444 returned with exit code 1
	I0516 23:06:12.639085    1284 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220516225301-2444: (1.1163281s)
	W0516 23:06:12.639085    1284 network_create.go:107] failed to create docker network kubenet-20220516225301-2444 192.168.58.0/24, will retry: subnet is taken
	I0516 23:06:12.660818    1284 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005161f0] amended:true}} dirty:map[192.168.49.0:0xc0005161f0 192.168.58.0:0xc000516448] misses:1}
	I0516 23:06:12.660818    1284 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:06:12.680157    1284 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005161f0] amended:true}} dirty:map[192.168.49.0:0xc0005161f0 192.168.58.0:0xc000516448 192.168.67.0:0xc0007125e0] misses:1}
	I0516 23:06:12.680157    1284 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:06:12.680157    1284 network_create.go:115] attempt to create docker network kubenet-20220516225301-2444 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0516 23:06:12.687534    1284 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220516225301-2444
	W0516 23:06:13.820477    1284 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220516225301-2444 returned with exit code 1
	I0516 23:06:13.820477    1284 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220516225301-2444: (1.1328789s)
	W0516 23:06:13.820477    1284 network_create.go:107] failed to create docker network kubenet-20220516225301-2444 192.168.67.0/24, will retry: subnet is taken
	I0516 23:06:13.838479    1284 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005161f0] amended:true}} dirty:map[192.168.49.0:0xc0005161f0 192.168.58.0:0xc000516448 192.168.67.0:0xc0007125e0] misses:2}
	I0516 23:06:13.839525    1284 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:06:13.858539    1284 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005161f0] amended:true}} dirty:map[192.168.49.0:0xc0005161f0 192.168.58.0:0xc000516448 192.168.67.0:0xc0007125e0 192.168.76.0:0xc0005164e0] misses:2}
	I0516 23:06:13.858539    1284 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:06:13.858539    1284 network_create.go:115] attempt to create docker network kubenet-20220516225301-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0516 23:06:13.868854    1284 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220516225301-2444
	W0516 23:06:14.964554    1284 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220516225301-2444 returned with exit code 1
	I0516 23:06:14.964554    1284 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220516225301-2444: (1.0956904s)
	E0516 23:06:14.964554    1284 network_create.go:104] error while trying to create docker network kubenet-20220516225301-2444 192.168.76.0/24: create docker network kubenet-20220516225301-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220516225301-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 526b2207bcd5c1e02ff03df019e0de9f78998af48faed1ebd1385c84e4c35a40 (br-526b2207bcd5): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	W0516 23:06:14.964554    1284 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubenet-20220516225301-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220516225301-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 526b2207bcd5c1e02ff03df019e0de9f78998af48faed1ebd1385c84e4c35a40 (br-526b2207bcd5): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubenet-20220516225301-2444 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220516225301-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 526b2207bcd5c1e02ff03df019e0de9f78998af48faed1ebd1385c84e4c35a40 (br-526b2207bcd5): conflicts with network 301630a99a7e980ec2819bd624b6571637620e51d82523946c0768bb28b51663 (br-301630a99a7e): networks have overlapping IPv4
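The failed attempts above walk candidate /24 subnets in a fixed stride: 192.168.49.0, 192.168.58.0, 192.168.67.0, 192.168.76.0 — a step of 9 in the third octet — until every candidate conflicts with an existing bridge and the error becomes un-retryable. A minimal sketch of that probing loop (the `is_taken` predicate and function name are illustrative, not minikube's actual API):

```python
import ipaddress

def next_free_subnet(is_taken, start="192.168.49.0/24", step=9, attempts=4):
    """Walk candidate /24 subnets in strides of `step` in the third
    octet, returning the first one `is_taken` reports as free."""
    net = ipaddress.ip_network(start)
    for i in range(attempts):
        # step * 256 advances the third octet by `step` per attempt
        candidate = ipaddress.ip_network(
            (int(net.network_address) + i * step * 256, net.prefixlen))
        if not is_taken(candidate):
            return candidate
    return None  # every candidate reserved, as in the failed run above

# The run above exhausted all four candidates:
taken = {"192.168.49.0/24", "192.168.58.0/24",
         "192.168.67.0/24", "192.168.76.0/24"}
print(next_free_subnet(lambda n: str(n) in taken))
```

With all four ranges taken the sketch returns `None`, mirroring the point in the log where minikube gives up on a dedicated network and falls back with the "Unable to create dedicated network" warning.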
	
	I0516 23:06:14.981042    1284 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 23:06:16.074992    1284 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.093941s)
	I0516 23:06:16.084304    1284 cli_runner.go:164] Run: docker volume create kubenet-20220516225301-2444 --label name.minikube.sigs.k8s.io=kubenet-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 23:06:17.173344    1284 cli_runner.go:211] docker volume create kubenet-20220516225301-2444 --label name.minikube.sigs.k8s.io=kubenet-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 23:06:17.173377    1284 cli_runner.go:217] Completed: docker volume create kubenet-20220516225301-2444 --label name.minikube.sigs.k8s.io=kubenet-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true: (1.0888676s)
	I0516 23:06:17.173582    1284 client.go:171] LocalClient.Create took 10.2732374s
	I0516 23:06:19.189924    1284 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:06:19.197021    1284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444
	W0516 23:06:20.269061    1284 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444 returned with exit code 1
	I0516 23:06:20.269287    1284 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444: (1.0718249s)
	I0516 23:06:20.269413    1284 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	I0516 23:06:20.568183    1284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444
	W0516 23:06:21.634537    1284 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444 returned with exit code 1
	I0516 23:06:21.634537    1284 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444: (1.066345s)
	W0516 23:06:21.634537    1284 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	
	W0516 23:06:21.634537    1284 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	I0516 23:06:21.646147    1284 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:06:21.654104    1284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444
	W0516 23:06:22.722641    1284 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444 returned with exit code 1
	I0516 23:06:22.722712    1284 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444: (1.068437s)
	I0516 23:06:22.722712    1284 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	I0516 23:06:23.028470    1284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444
	W0516 23:06:24.122910    1284 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444 returned with exit code 1
	I0516 23:06:24.122989    1284 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444: (1.094217s)
	W0516 23:06:24.123185    1284 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	
	W0516 23:06:24.123239    1284 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	I0516 23:06:24.123239    1284 start.go:134] duration metric: createHost completed in 17.228926s
	I0516 23:06:24.123239    1284 start.go:81] releasing machines lock for "kubenet-20220516225301-2444", held for 17.2298804s
	W0516 23:06:24.123239    1284 start.go:608] error starting host: creating host: create: creating: setting up container node: creating volume for kubenet-20220516225301-2444 container: docker volume create kubenet-20220516225301-2444 --label name.minikube.sigs.k8s.io=kubenet-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubenet-20220516225301-2444: error while creating volume root path '/var/lib/docker/volumes/kubenet-20220516225301-2444': mkdir /var/lib/docker/volumes/kubenet-20220516225301-2444: read-only file system
	I0516 23:06:24.141680    1284 cli_runner.go:164] Run: docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}
	W0516 23:06:25.201391    1284 cli_runner.go:211] docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:06:25.201455    1284 cli_runner.go:217] Completed: docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}: (1.0595256s)
	I0516 23:06:25.201513    1284 delete.go:82] Unable to get host status for kubenet-20220516225301-2444, assuming it has already been deleted: state: unknown state "kubenet-20220516225301-2444": docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	W0516 23:06:25.201513    1284 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for kubenet-20220516225301-2444 container: docker volume create kubenet-20220516225301-2444 --label name.minikube.sigs.k8s.io=kubenet-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubenet-20220516225301-2444: error while creating volume root path '/var/lib/docker/volumes/kubenet-20220516225301-2444': mkdir /var/lib/docker/volumes/kubenet-20220516225301-2444: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for kubenet-20220516225301-2444 container: docker volume create kubenet-20220516225301-2444 --label name.minikube.sigs.k8s.io=kubenet-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubenet-20220516225301-2444: error while creating volume root path '/var/lib/docker/volumes/kubenet-20220516225301-2444': mkdir /var/lib/docker/volumes/kubenet-20220516225301-2444: read-only file system
	
	I0516 23:06:25.201513    1284 start.go:623] Will try again in 5 seconds ...
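The retry cadence visible above — sub-second "will retry after ..." waits for the port lookups, then a flat 5-second pause before re-acquiring the machines lock — can be sketched as a generic helper. This is a simplified illustration of the pattern, not minikube's actual retry implementation:

```python
import time

def retry(fn, delays, on_error=lambda msg: None):
    """Call fn; on failure, log and sleep for the next delay before
    retrying, mirroring the will-retry-after lines in the log above.
    The final attempt (after delays are exhausted) propagates its error."""
    for delay in delays:
        try:
            return fn()
        except Exception as err:
            on_error(f"will retry after {delay}s: {err}")
            time.sleep(delay)
    return fn()
```

Note that, as in the log, retrying only helps with transient failures; here the underlying `docker volume create` keeps failing because the daemon's `/var/lib/docker` is a read-only file system, so the second pass hits the same error.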
	I0516 23:06:30.210607    1284 start.go:352] acquiring machines lock for kubenet-20220516225301-2444: {Name:mkc6455833424c28b8d4ffee2207efd4c1b99a93 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0516 23:06:30.211022    1284 start.go:356] acquired machines lock for "kubenet-20220516225301-2444" in 236.5µs
	I0516 23:06:30.211298    1284 start.go:94] Skipping create...Using existing machine configuration
	I0516 23:06:30.211352    1284 fix.go:55] fixHost starting: 
	I0516 23:06:30.229648    1284 cli_runner.go:164] Run: docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}
	W0516 23:06:31.306493    1284 cli_runner.go:211] docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:06:31.306493    1284 cli_runner.go:217] Completed: docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}: (1.0768356s)
	I0516 23:06:31.306493    1284 fix.go:103] recreateIfNeeded on kubenet-20220516225301-2444: state= err=unknown state "kubenet-20220516225301-2444": docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	I0516 23:06:31.306493    1284 fix.go:108] machineExists: false. err=machine does not exist
	I0516 23:06:31.312247    1284 out.go:177] * docker "kubenet-20220516225301-2444" container is missing, will recreate.
	I0516 23:06:31.314616    1284 delete.go:124] DEMOLISHING kubenet-20220516225301-2444 ...
	I0516 23:06:31.328328    1284 cli_runner.go:164] Run: docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}
	W0516 23:06:32.403148    1284 cli_runner.go:211] docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:06:32.403148    1284 cli_runner.go:217] Completed: docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}: (1.0748103s)
	W0516 23:06:32.403148    1284 stop.go:75] unable to get state: unknown state "kubenet-20220516225301-2444": docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	I0516 23:06:32.403148    1284 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "kubenet-20220516225301-2444": docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	I0516 23:06:32.419523    1284 cli_runner.go:164] Run: docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}
	W0516 23:06:33.487108    1284 cli_runner.go:211] docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:06:33.487108    1284 cli_runner.go:217] Completed: docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}: (1.0674514s)
	I0516 23:06:33.487250    1284 delete.go:82] Unable to get host status for kubenet-20220516225301-2444, assuming it has already been deleted: state: unknown state "kubenet-20220516225301-2444": docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	I0516 23:06:33.497703    1284 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubenet-20220516225301-2444
	W0516 23:06:34.579009    1284 cli_runner.go:211] docker container inspect -f {{.Id}} kubenet-20220516225301-2444 returned with exit code 1
	I0516 23:06:34.579066    1284 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} kubenet-20220516225301-2444: (1.0812368s)
	I0516 23:06:34.579108    1284 kic.go:356] could not find the container kubenet-20220516225301-2444 to remove it. will try anyways
	I0516 23:06:34.588898    1284 cli_runner.go:164] Run: docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}
	W0516 23:06:35.619999    1284 cli_runner.go:211] docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:06:35.619999    1284 cli_runner.go:217] Completed: docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}: (1.0308973s)
	W0516 23:06:35.619999    1284 oci.go:84] error getting container status, will try to delete anyways: unknown state "kubenet-20220516225301-2444": docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	I0516 23:06:35.627641    1284 cli_runner.go:164] Run: docker exec --privileged -t kubenet-20220516225301-2444 /bin/bash -c "sudo init 0"
	W0516 23:06:36.687208    1284 cli_runner.go:211] docker exec --privileged -t kubenet-20220516225301-2444 /bin/bash -c "sudo init 0" returned with exit code 1
	I0516 23:06:36.687355    1284 cli_runner.go:217] Completed: docker exec --privileged -t kubenet-20220516225301-2444 /bin/bash -c "sudo init 0": (1.0594221s)
	I0516 23:06:36.687411    1284 oci.go:641] error shutdown kubenet-20220516225301-2444: docker exec --privileged -t kubenet-20220516225301-2444 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	I0516 23:06:37.711012    1284 cli_runner.go:164] Run: docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}
	W0516 23:06:38.790512    1284 cli_runner.go:211] docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:06:38.790588    1284 cli_runner.go:217] Completed: docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}: (1.0793355s)
	I0516 23:06:38.790617    1284 oci.go:653] temporary error verifying shutdown: unknown state "kubenet-20220516225301-2444": docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	I0516 23:06:38.790743    1284 oci.go:655] temporary error: container kubenet-20220516225301-2444 status is  but expect it to be exited
	I0516 23:06:38.790785    1284 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "kubenet-20220516225301-2444": docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	I0516 23:06:39.275668    1284 cli_runner.go:164] Run: docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}
	W0516 23:06:40.339719    1284 cli_runner.go:211] docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:06:40.339886    1284 cli_runner.go:217] Completed: docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}: (1.0640083s)
	I0516 23:06:40.339932    1284 oci.go:653] temporary error verifying shutdown: unknown state "kubenet-20220516225301-2444": docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	I0516 23:06:40.339932    1284 oci.go:655] temporary error: container kubenet-20220516225301-2444 status is  but expect it to be exited
	I0516 23:06:40.340068    1284 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "kubenet-20220516225301-2444": docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	I0516 23:06:41.252932    1284 cli_runner.go:164] Run: docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}
	W0516 23:06:42.299021    1284 cli_runner.go:211] docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:06:42.299021    1284 cli_runner.go:217] Completed: docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}: (1.0460795s)
	I0516 23:06:42.299021    1284 oci.go:653] temporary error verifying shutdown: unknown state "kubenet-20220516225301-2444": docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	I0516 23:06:42.299021    1284 oci.go:655] temporary error: container kubenet-20220516225301-2444 status is  but expect it to be exited
	I0516 23:06:42.299021    1284 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "kubenet-20220516225301-2444": docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	I0516 23:06:42.957974    1284 cli_runner.go:164] Run: docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}
	W0516 23:06:44.006804    1284 cli_runner.go:211] docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:06:44.006804    1284 cli_runner.go:217] Completed: docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}: (1.0487059s)
	I0516 23:06:44.006804    1284 oci.go:653] temporary error verifying shutdown: unknown state "kubenet-20220516225301-2444": docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	I0516 23:06:44.006804    1284 oci.go:655] temporary error: container kubenet-20220516225301-2444 status is  but expect it to be exited
	I0516 23:06:44.006804    1284 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "kubenet-20220516225301-2444": docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	I0516 23:06:45.129098    1284 cli_runner.go:164] Run: docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}
	W0516 23:06:46.171239    1284 cli_runner.go:211] docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:06:46.171239    1284 cli_runner.go:217] Completed: docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}: (1.0419491s)
	I0516 23:06:46.171239    1284 oci.go:653] temporary error verifying shutdown: unknown state "kubenet-20220516225301-2444": docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	I0516 23:06:46.171239    1284 oci.go:655] temporary error: container kubenet-20220516225301-2444 status is  but expect it to be exited
	I0516 23:06:46.171239    1284 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "kubenet-20220516225301-2444": docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	I0516 23:06:47.707524    1284 cli_runner.go:164] Run: docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}
	W0516 23:06:48.751815    1284 cli_runner.go:211] docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:06:48.751955    1284 cli_runner.go:217] Completed: docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}: (1.0442463s)
	I0516 23:06:48.752041    1284 oci.go:653] temporary error verifying shutdown: unknown state "kubenet-20220516225301-2444": docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	I0516 23:06:48.752067    1284 oci.go:655] temporary error: container kubenet-20220516225301-2444 status is  but expect it to be exited
	I0516 23:06:48.752127    1284 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "kubenet-20220516225301-2444": docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	I0516 23:06:51.808473    1284 cli_runner.go:164] Run: docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}
	W0516 23:06:52.927229    1284 cli_runner.go:211] docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}} returned with exit code 1
	I0516 23:06:52.927229    1284 cli_runner.go:217] Completed: docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}: (1.1187458s)
	I0516 23:06:52.927229    1284 oci.go:653] temporary error verifying shutdown: unknown state "kubenet-20220516225301-2444": docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	I0516 23:06:52.927229    1284 oci.go:655] temporary error: container kubenet-20220516225301-2444 status is  but expect it to be exited
	I0516 23:06:52.927229    1284 oci.go:88] couldn't shut down kubenet-20220516225301-2444 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "kubenet-20220516225301-2444": docker container inspect kubenet-20220516225301-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	 
	I0516 23:06:52.935922    1284 cli_runner.go:164] Run: docker rm -f -v kubenet-20220516225301-2444
	I0516 23:06:53.973053    1284 cli_runner.go:217] Completed: docker rm -f -v kubenet-20220516225301-2444: (1.0371224s)
	I0516 23:06:53.982843    1284 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubenet-20220516225301-2444
	W0516 23:06:55.081814    1284 cli_runner.go:211] docker container inspect -f {{.Id}} kubenet-20220516225301-2444 returned with exit code 1
	I0516 23:06:55.081814    1284 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} kubenet-20220516225301-2444: (1.0988682s)
	I0516 23:06:55.091647    1284 cli_runner.go:164] Run: docker network inspect kubenet-20220516225301-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:06:56.178944    1284 cli_runner.go:211] docker network inspect kubenet-20220516225301-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:06:56.178944    1284 cli_runner.go:217] Completed: docker network inspect kubenet-20220516225301-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0872875s)
	I0516 23:06:56.187887    1284 network_create.go:272] running [docker network inspect kubenet-20220516225301-2444] to gather additional debugging logs...
	I0516 23:06:56.187887    1284 cli_runner.go:164] Run: docker network inspect kubenet-20220516225301-2444
	W0516 23:06:57.260644    1284 cli_runner.go:211] docker network inspect kubenet-20220516225301-2444 returned with exit code 1
	I0516 23:06:57.260644    1284 cli_runner.go:217] Completed: docker network inspect kubenet-20220516225301-2444: (1.072748s)
	I0516 23:06:57.260644    1284 network_create.go:275] error running [docker network inspect kubenet-20220516225301-2444]: docker network inspect kubenet-20220516225301-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubenet-20220516225301-2444
	I0516 23:06:57.260644    1284 network_create.go:277] output of [docker network inspect kubenet-20220516225301-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubenet-20220516225301-2444
	
	** /stderr **
	W0516 23:06:57.261650    1284 delete.go:139] delete failed (probably ok) <nil>
	I0516 23:06:57.261650    1284 fix.go:115] Sleeping 1 second for extra luck!
	I0516 23:06:58.273249    1284 start.go:131] createHost starting for "" (driver="docker")
	I0516 23:06:58.278952    1284 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0516 23:06:58.279793    1284 start.go:165] libmachine.API.Create for "kubenet-20220516225301-2444" (driver="docker")
	I0516 23:06:58.279849    1284 client.go:168] LocalClient.Create starting
	I0516 23:06:58.279849    1284 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0516 23:06:58.280381    1284 main.go:134] libmachine: Decoding PEM data...
	I0516 23:06:58.280515    1284 main.go:134] libmachine: Parsing certificate...
	I0516 23:06:58.280795    1284 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0516 23:06:58.281149    1284 main.go:134] libmachine: Decoding PEM data...
	I0516 23:06:58.281238    1284 main.go:134] libmachine: Parsing certificate...
	I0516 23:06:58.296526    1284 cli_runner.go:164] Run: docker network inspect kubenet-20220516225301-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0516 23:06:59.391794    1284 cli_runner.go:211] docker network inspect kubenet-20220516225301-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0516 23:06:59.391794    1284 cli_runner.go:217] Completed: docker network inspect kubenet-20220516225301-2444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0950419s)
	I0516 23:06:59.399837    1284 network_create.go:272] running [docker network inspect kubenet-20220516225301-2444] to gather additional debugging logs...
	I0516 23:06:59.399837    1284 cli_runner.go:164] Run: docker network inspect kubenet-20220516225301-2444
	W0516 23:07:00.494251    1284 cli_runner.go:211] docker network inspect kubenet-20220516225301-2444 returned with exit code 1
	I0516 23:07:00.494326    1284 cli_runner.go:217] Completed: docker network inspect kubenet-20220516225301-2444: (1.0942872s)
	I0516 23:07:00.494355    1284 network_create.go:275] error running [docker network inspect kubenet-20220516225301-2444]: docker network inspect kubenet-20220516225301-2444: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubenet-20220516225301-2444
	I0516 23:07:00.494381    1284 network_create.go:277] output of [docker network inspect kubenet-20220516225301-2444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubenet-20220516225301-2444
	
	** /stderr **
	I0516 23:07:00.504509    1284 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0516 23:07:01.583378    1284 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0788594s)
	I0516 23:07:01.609775    1284 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005161f0] amended:true}} dirty:map[192.168.49.0:0xc0005161f0 192.168.58.0:0xc000516448 192.168.67.0:0xc0007125e0 192.168.76.0:0xc0005164e0] misses:2}
	I0516 23:07:01.609907    1284 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:07:01.627649    1284 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005161f0] amended:true}} dirty:map[192.168.49.0:0xc0005161f0 192.168.58.0:0xc000516448 192.168.67.0:0xc0007125e0 192.168.76.0:0xc0005164e0] misses:3}
	I0516 23:07:01.627649    1284 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:07:01.643753    1284 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005161f0 192.168.58.0:0xc000516448 192.168.67.0:0xc0007125e0 192.168.76.0:0xc0005164e0] amended:false}} dirty:map[] misses:0}
	I0516 23:07:01.643961    1284 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:07:01.658279    1284 network.go:279] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005161f0 192.168.58.0:0xc000516448 192.168.67.0:0xc0007125e0 192.168.76.0:0xc0005164e0] amended:false}} dirty:map[] misses:0}
	I0516 23:07:01.658279    1284 network.go:238] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:07:01.672708    1284 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005161f0 192.168.58.0:0xc000516448 192.168.67.0:0xc0007125e0 192.168.76.0:0xc0005164e0] amended:true}} dirty:map[192.168.49.0:0xc0005161f0 192.168.58.0:0xc000516448 192.168.67.0:0xc0007125e0 192.168.76.0:0xc0005164e0 192.168.85.0:0xc00014ed80] misses:0}
	I0516 23:07:01.673513    1284 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0516 23:07:01.673513    1284 network_create.go:115] attempt to create docker network kubenet-20220516225301-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0516 23:07:01.681027    1284 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220516225301-2444
	W0516 23:07:02.733247    1284 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220516225301-2444 returned with exit code 1
	I0516 23:07:02.733353    1284 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220516225301-2444: (1.05212s)
	E0516 23:07:02.733412    1284 network_create.go:104] error while trying to create docker network kubenet-20220516225301-2444 192.168.85.0/24: create docker network kubenet-20220516225301-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220516225301-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 1502527e9676a6700974db758a905e5ada07d9d4adb64d3b3f1237926ddef49d (br-1502527e9676): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	W0516 23:07:02.733649    1284 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubenet-20220516225301-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220516225301-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 1502527e9676a6700974db758a905e5ada07d9d4adb64d3b3f1237926ddef49d (br-1502527e9676): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubenet-20220516225301-2444 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220516225301-2444: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 1502527e9676a6700974db758a905e5ada07d9d4adb64d3b3f1237926ddef49d (br-1502527e9676): conflicts with network ea4bbeff936d23c670c69e9c998b4c9bf7e8e8d0d152a59c317a03cffa093357 (br-ea4bbeff936d): networks have overlapping IPv4
	
	I0516 23:07:02.751124    1284 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0516 23:07:03.865034    1284 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1137215s)
	I0516 23:07:03.872415    1284 cli_runner.go:164] Run: docker volume create kubenet-20220516225301-2444 --label name.minikube.sigs.k8s.io=kubenet-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true
	W0516 23:07:04.959415    1284 cli_runner.go:211] docker volume create kubenet-20220516225301-2444 --label name.minikube.sigs.k8s.io=kubenet-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0516 23:07:04.959415    1284 cli_runner.go:217] Completed: docker volume create kubenet-20220516225301-2444 --label name.minikube.sigs.k8s.io=kubenet-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true: (1.08699s)
	I0516 23:07:04.959597    1284 client.go:171] LocalClient.Create took 6.6796883s
	I0516 23:07:06.983931    1284 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:07:06.991029    1284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444
	W0516 23:07:08.128332    1284 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444 returned with exit code 1
	I0516 23:07:08.128332    1284 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444: (1.1372932s)
	I0516 23:07:08.128332    1284 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	I0516 23:07:08.475394    1284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444
	W0516 23:07:09.600894    1284 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444 returned with exit code 1
	I0516 23:07:09.600894    1284 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444: (1.1252572s)
	W0516 23:07:09.600894    1284 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	
	W0516 23:07:09.600894    1284 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	I0516 23:07:09.612569    1284 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:07:09.620345    1284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444
	W0516 23:07:10.656432    1284 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444 returned with exit code 1
	I0516 23:07:10.656465    1284 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444: (1.0359315s)
	I0516 23:07:10.656661    1284 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	I0516 23:07:10.888639    1284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444
	W0516 23:07:11.964621    1284 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444 returned with exit code 1
	I0516 23:07:11.964686    1284 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444: (1.0758004s)
	W0516 23:07:11.964686    1284 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	
	W0516 23:07:11.964686    1284 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	I0516 23:07:11.964686    1284 start.go:134] duration metric: createHost completed in 13.6910321s
	I0516 23:07:11.976527    1284 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0516 23:07:11.983556    1284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444
	W0516 23:07:13.076835    1284 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444 returned with exit code 1
	I0516 23:07:13.077022    1284 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444: (1.0932688s)
	I0516 23:07:13.077251    1284 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	I0516 23:07:13.337096    1284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444
	W0516 23:07:14.408029    1284 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444 returned with exit code 1
	I0516 23:07:14.408029    1284 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444: (1.070924s)
	W0516 23:07:14.408029    1284 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	
	W0516 23:07:14.408029    1284 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	I0516 23:07:14.419960    1284 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0516 23:07:14.426995    1284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444
	W0516 23:07:15.498712    1284 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444 returned with exit code 1
	I0516 23:07:15.498712    1284 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444: (1.0717079s)
	I0516 23:07:15.498712    1284 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	I0516 23:07:15.711915    1284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444
	W0516 23:07:16.822148    1284 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444 returned with exit code 1
	I0516 23:07:16.822148    1284 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444: (1.1101004s)
	W0516 23:07:16.822148    1284 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	
	W0516 23:07:16.822148    1284 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220516225301-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220516225301-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220516225301-2444
	I0516 23:07:16.822148    1284 fix.go:57] fixHost completed within 46.6103802s
	I0516 23:07:16.822148    1284 start.go:81] releasing machines lock for "kubenet-20220516225301-2444", held for 46.6106628s
	W0516 23:07:16.822843    1284 out.go:239] * Failed to start docker container. Running "minikube delete -p kubenet-20220516225301-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for kubenet-20220516225301-2444 container: docker volume create kubenet-20220516225301-2444 --label name.minikube.sigs.k8s.io=kubenet-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubenet-20220516225301-2444: error while creating volume root path '/var/lib/docker/volumes/kubenet-20220516225301-2444': mkdir /var/lib/docker/volumes/kubenet-20220516225301-2444: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p kubenet-20220516225301-2444" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for kubenet-20220516225301-2444 container: docker volume create kubenet-20220516225301-2444 --label name.minikube.sigs.k8s.io=kubenet-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubenet-20220516225301-2444: error while creating volume root path '/var/lib/docker/volumes/kubenet-20220516225301-2444': mkdir /var/lib/docker/volumes/kubenet-20220516225301-2444: read-only file system
	
	I0516 23:07:16.825835    1284 out.go:177] 
	W0516 23:07:16.831829    1284 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for kubenet-20220516225301-2444 container: docker volume create kubenet-20220516225301-2444 --label name.minikube.sigs.k8s.io=kubenet-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubenet-20220516225301-2444: error while creating volume root path '/var/lib/docker/volumes/kubenet-20220516225301-2444': mkdir /var/lib/docker/volumes/kubenet-20220516225301-2444: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for kubenet-20220516225301-2444 container: docker volume create kubenet-20220516225301-2444 --label name.minikube.sigs.k8s.io=kubenet-20220516225301-2444 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubenet-20220516225301-2444: error while creating volume root path '/var/lib/docker/volumes/kubenet-20220516225301-2444': mkdir /var/lib/docker/volumes/kubenet-20220516225301-2444: read-only file system
	
	W0516 23:07:16.831829    1284 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0516 23:07:16.831829    1284 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0516 23:07:16.834831    1284 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 60
--- FAIL: TestNetworkPlugins/group/kubenet/Start (81.26s)
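The `PR_DOCKER_READONLY_VOL` failure above comes from `docker volume create` failing against a read-only `/var/lib/docker` inside Docker Desktop's backing VM. The same probe can be sketched as a minimal shell check (the volume name `ro-check` is illustrative, and a local `docker` CLI is assumed):

```shell
#!/bin/sh
# Probe whether the Docker daemon can create a volume, mirroring the
# failing step in the log. "ro-check" is a throwaway name.
if docker volume create ro-check >/dev/null 2>&1; then
    docker volume rm ro-check >/dev/null 2>&1
    echo "volume root writable"
else
    # Matches the log's symptom: read-only /var/lib/docker (or no daemon).
    echo "volume create failed"
fi
```

If the probe fails, the log's own suggestion (restart Docker) is the first remedy; see the related issue linked above.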


Test pass (50/219)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 17.53
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.76
10 TestDownloadOnly/v1.23.6/json-events 13.28
11 TestDownloadOnly/v1.23.6/preload-exists 0
14 TestDownloadOnly/v1.23.6/kubectl 0
15 TestDownloadOnly/v1.23.6/LogsDuration 0.62
16 TestDownloadOnly/DeleteAll 11.15
17 TestDownloadOnly/DeleteAlwaysSucceeds 6.98
18 TestDownloadOnlyKic 45.43
19 TestBinaryMirror 16.82
33 TestErrorSpam/start 21.16
34 TestErrorSpam/status 8.2
35 TestErrorSpam/pause 9.11
36 TestErrorSpam/unpause 9.05
37 TestErrorSpam/stop 66.01
40 TestFunctional/serial/CopySyncFile 0.03
48 TestFunctional/serial/CacheCmd/cache/add_remote 10.93
50 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.39
51 TestFunctional/serial/CacheCmd/cache/list 0.35
54 TestFunctional/serial/CacheCmd/cache/delete 0.73
62 TestFunctional/parallel/ConfigCmd 2.21
64 TestFunctional/parallel/DryRun 12.89
65 TestFunctional/parallel/InternationalLanguage 5.38
71 TestFunctional/parallel/AddonsCmd 3.43
86 TestFunctional/parallel/ProfileCmd/profile_not_create 7.17
87 TestFunctional/parallel/ProfileCmd/profile_list 4.55
88 TestFunctional/parallel/ProfileCmd/profile_json_output 4.58
90 TestFunctional/parallel/Version/short 0.36
93 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
100 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
114 TestFunctional/parallel/ImageCommands/ImageRemove 5.94
117 TestFunctional/delete_addon-resizer_images 2.09
118 TestFunctional/delete_my-image_image 1.1
119 TestFunctional/delete_minikube_cached_images 1.07
125 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 2.8
138 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
139 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
144 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
145 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
152 TestErrorJSONOutput 7.42
155 TestKicCustomNetwork/use_default_bridge_network 227.07
158 TestMainNoArgs 0.33
191 TestNoKubernetes/serial/StartNoK8sWithVersion 0.65
192 TestStoppedBinaryUpgrade/Setup 0.62
259 TestStartStop/group/newest-cni/serial/DeployApp 0
260 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 3.05
272 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
273 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.16.0/json-events (17.53s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220516215532-2444 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220516215532-2444 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker: (17.5294658s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (17.53s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.76s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-20220516215532-2444
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-20220516215532-2444: exit status 85 (754.3461ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/16 21:55:34
	Running on machine: minikube2
	Binary: Built with gc go1.18.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0516 21:55:34.222176    8360 out.go:296] Setting OutFile to fd 612 ...
	I0516 21:55:34.279335    8360 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 21:55:34.279365    8360 out.go:309] Setting ErrFile to fd 664...
	I0516 21:55:34.279365    8360 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0516 21:55:34.300624    8360 root.go:300] Error reading config file at C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0516 21:55:34.304162    8360 out.go:303] Setting JSON to true
	I0516 21:55:34.307761    8360 start.go:115] hostinfo: {"hostname":"minikube2","uptime":1246,"bootTime":1652736888,"procs":147,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0516 21:55:34.307929    8360 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0516 21:55:34.325512    8360 out.go:97] [download-only-20220516215532-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	W0516 21:55:34.325920    8360 preload.go:295] Failed to list preload files: open C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0516 21:55:34.326011    8360 notify.go:193] Checking for updates...
	I0516 21:55:34.328397    8360 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0516 21:55:34.331636    8360 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0516 21:55:34.334041    8360 out.go:169] MINIKUBE_LOCATION=12739
	I0516 21:55:34.336667    8360 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0516 21:55:34.341783    8360 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0516 21:55:34.341783    8360 driver.go:358] Setting default libvirt URI to qemu:///system
	I0516 21:55:36.901009    8360 docker.go:137] docker version: linux-20.10.14
	I0516 21:55:36.909322    8360 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 21:55:38.951242    8360 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0408976s)
	I0516 21:55:38.951242    8360 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-16 21:55:37.9123729 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0516 21:55:38.964045    8360 out.go:97] Using the docker driver based on user configuration
	I0516 21:55:38.964045    8360 start.go:284] selected driver: docker
	I0516 21:55:38.964045    8360 start.go:806] validating driver "docker" against <nil>
	I0516 21:55:38.987512    8360 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 21:55:41.011457    8360 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0239383s)
	I0516 21:55:41.011691    8360 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-16 21:55:39.9826417 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0516 21:55:41.011691    8360 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0516 21:55:41.136522    8360 start_flags.go:373] Using suggested 16300MB memory alloc based on sys=65534MB, container=51405MB
	I0516 21:55:41.137229    8360 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0516 21:55:41.150906    8360 out.go:169] Using Docker Desktop driver with the root privilege
	I0516 21:55:41.153269    8360 cni.go:95] Creating CNI manager for ""
	I0516 21:55:41.153813    8360 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0516 21:55:41.153856    8360 start_flags.go:306] config:
	{Name:download-only-20220516215532-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220516215532-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 21:55:41.156415    8360 out.go:97] Starting control plane node download-only-20220516215532-2444 in cluster download-only-20220516215532-2444
	I0516 21:55:41.156531    8360 cache.go:120] Beginning downloading kic base image for docker with docker
	I0516 21:55:41.158678    8360 out.go:97] Pulling base image ...
	I0516 21:55:41.158731    8360 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0516 21:55:41.158731    8360 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0516 21:55:41.200399    8360 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0516 21:55:41.200399    8360 cache.go:57] Caching tarball of preloaded images
	I0516 21:55:41.200399    8360 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0516 21:55:41.204706    8360 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0516 21:55:41.204779    8360 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0516 21:55:41.268443    8360 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0516 21:55:42.393140    8360 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0516 21:55:42.393140    8360 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 21:55:42.393140    8360 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 21:55:42.393140    8360 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
	I0516 21:55:42.394470    8360 image.go:119] Writing gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0516 21:55:44.523019    8360 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0516 21:55:44.607210    8360 preload.go:256] verifying checksumm of C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0516 21:55:45.688154    8360 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0516 21:55:45.689331    8360 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\download-only-20220516215532-2444\config.json ...
	I0516 21:55:45.690050    8360 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\download-only-20220516215532-2444\config.json: {Name:mke0bd24a7c55b46d61e3b341983e775a235c4ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0516 21:55:45.690356    8360 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0516 21:55:45.692540    8360 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/windows/amd64/kubectl.exe?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/windows/amd64/kubectl.exe.sha1 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\windows\amd64\v1.16.0/kubectl.exe
	I0516 21:55:49.254456    8360 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
	I0516 21:55:49.254456    8360 cache.go:206] Successfully downloaded all kic artifacts
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220516215532-2444"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.76s)
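The preload download in the log above appends a `checksum=md5:…` query, and `preload.go` saves and then verifies that digest after the transfer. The verification step can be sketched offline; the file here is an empty stand-in with a well-known md5, not the real tarball:

```shell
#!/bin/sh
# Offline sketch of the preload checksum-verification step.
# The stand-in file is empty; an empty file's md5 is a known constant.
: > preload-stand-in.tar.lz4
want=d41d8cd98f00b204e9800998ecf8427e   # md5 of an empty file
got=$(md5sum preload-stand-in.tar.lz4 | awk '{print $1}')
if [ "$got" = "$want" ]; then
    echo "checksum ok"
else
    echo "checksum mismatch" >&2
    exit 1
fi
rm -f preload-stand-in.tar.lz4
```

In the real run the expected digest comes from the `checksum=md5:…` value in the download URL rather than being hardcoded.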

TestDownloadOnly/v1.23.6/json-events (13.28s)

=== RUN   TestDownloadOnly/v1.23.6/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220516215532-2444 --force --alsologtostderr --kubernetes-version=v1.23.6 --container-runtime=docker --driver=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220516215532-2444 --force --alsologtostderr --kubernetes-version=v1.23.6 --container-runtime=docker --driver=docker: (13.2761031s)
--- PASS: TestDownloadOnly/v1.23.6/json-events (13.28s)

TestDownloadOnly/v1.23.6/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.23.6/preload-exists
--- PASS: TestDownloadOnly/v1.23.6/preload-exists (0.00s)

TestDownloadOnly/v1.23.6/kubectl (0s)

=== RUN   TestDownloadOnly/v1.23.6/kubectl
--- PASS: TestDownloadOnly/v1.23.6/kubectl (0.00s)

TestDownloadOnly/v1.23.6/LogsDuration (0.62s)

=== RUN   TestDownloadOnly/v1.23.6/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-20220516215532-2444
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-20220516215532-2444: exit status 85 (619.2726ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/16 21:55:51
	Running on machine: minikube2
	Binary: Built with gc go1.18.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0516 21:55:51.086278    4984 out.go:296] Setting OutFile to fd 732 ...
	I0516 21:55:51.141937    4984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 21:55:51.141937    4984 out.go:309] Setting ErrFile to fd 736...
	I0516 21:55:51.141937    4984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0516 21:55:51.151697    4984 root.go:300] Error reading config file at C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I0516 21:55:51.152362    4984 out.go:303] Setting JSON to true
	I0516 21:55:51.154556    4984 start.go:115] hostinfo: {"hostname":"minikube2","uptime":1263,"bootTime":1652736888,"procs":148,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0516 21:55:51.154556    4984 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0516 21:55:51.158915    4984 out.go:97] [download-only-20220516215532-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0516 21:55:51.158915    4984 notify.go:193] Checking for updates...
	I0516 21:55:51.162047    4984 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0516 21:55:51.164962    4984 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0516 21:55:51.167399    4984 out.go:169] MINIKUBE_LOCATION=12739
	I0516 21:55:51.170663    4984 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0516 21:55:51.176511    4984 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0516 21:55:51.177972    4984 config.go:178] Loaded profile config "download-only-20220516215532-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0516 21:55:51.178300    4984 start.go:714] api.Load failed for download-only-20220516215532-2444: filestore "download-only-20220516215532-2444": Docker machine "download-only-20220516215532-2444" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0516 21:55:51.178465    4984 driver.go:358] Setting default libvirt URI to qemu:///system
	W0516 21:55:51.178465    4984 start.go:714] api.Load failed for download-only-20220516215532-2444: filestore "download-only-20220516215532-2444": Docker machine "download-only-20220516215532-2444" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0516 21:55:53.761673    4984 docker.go:137] docker version: linux-20.10.14
	I0516 21:55:53.773162    4984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 21:55:55.792284    4984 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0190336s)
	I0516 21:55:55.793145    4984 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-16 21:55:54.7727967 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0516 21:55:55.817616    4984 out.go:97] Using the docker driver based on existing profile
	I0516 21:55:55.818441    4984 start.go:284] selected driver: docker
	I0516 21:55:55.818441    4984 start.go:806] validating driver "docker" against &{Name:download-only-20220516215532-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220516215532-2444 Namespace:default APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 21:55:55.842656    4984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 21:55:57.858717    4984 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0159341s)
	I0516 21:55:57.858717    4984 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-16 21:55:56.8388475 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0516 21:55:57.906921    4984 cni.go:95] Creating CNI manager for ""
	I0516 21:55:57.906921    4984 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0516 21:55:57.906921    4984 start_flags.go:306] config:
	{Name:download-only-20220516215532-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:download-only-20220516215532-2444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 21:55:58.011498    4984 out.go:97] Starting control plane node download-only-20220516215532-2444 in cluster download-only-20220516215532-2444
	I0516 21:55:58.011498    4984 cache.go:120] Beginning downloading kic base image for docker with docker
	I0516 21:55:58.015023    4984 out.go:97] Pulling base image ...
	I0516 21:55:58.015175    4984 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0516 21:55:58.015218    4984 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0516 21:55:58.052305    4984 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.6/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0516 21:55:58.052305    4984 cache.go:57] Caching tarball of preloaded images
	I0516 21:55:58.052990    4984 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0516 21:55:58.055906    4984 out.go:97] Downloading Kubernetes v1.23.6 preload ...
	I0516 21:55:58.055943    4984 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 ...
	I0516 21:55:58.124493    4984 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.6/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4?checksum=md5:a6c3f222f3cce2a88e27e126d64eb717 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0516 21:55:59.140413    4984 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0516 21:55:59.140413    4984 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 21:55:59.140413    4984 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.31@sha256_c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c.tar
	I0516 21:55:59.140413    4984 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
	I0516 21:55:59.140413    4984 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory, skipping pull
	I0516 21:55:59.140413    4984 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in cache, skipping pull
	I0516 21:55:59.141145    4984 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
	I0516 21:56:01.105399    4984 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 ...
	I0516 21:56:01.106626    4984 preload.go:256] verifying checksum of C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220516215532-2444"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.6/LogsDuration (0.62s)

TestDownloadOnly/DeleteAll (11.15s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (11.148716s)
--- PASS: TestDownloadOnly/DeleteAll (11.15s)

TestDownloadOnly/DeleteAlwaysSucceeds (6.98s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-20220516215532-2444
aaa_download_only_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-20220516215532-2444: (6.9834958s)
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (6.98s)

TestDownloadOnlyKic (45.43s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-20220516215629-2444 --force --alsologtostderr --driver=docker
aaa_download_only_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-20220516215629-2444 --force --alsologtostderr --driver=docker: (36.0208668s)
helpers_test.go:175: Cleaning up "download-docker-20220516215629-2444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-20220516215629-2444
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-docker-20220516215629-2444: (8.2232737s)
--- PASS: TestDownloadOnlyKic (45.43s)

TestBinaryMirror (16.82s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-20220516215715-2444 --alsologtostderr --binary-mirror http://127.0.0.1:54029 --driver=docker
aaa_download_only_test.go:310: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-20220516215715-2444 --alsologtostderr --binary-mirror http://127.0.0.1:54029 --driver=docker: (8.2561794s)
helpers_test.go:175: Cleaning up "binary-mirror-20220516215715-2444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-20220516215715-2444
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p binary-mirror-20220516215715-2444: (8.3200392s)
--- PASS: TestBinaryMirror (16.82s)

TestErrorSpam/start (21.16s)

=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220516215858-2444 start --dry-run
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220516215858-2444 start --dry-run: (7.0789618s)
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220516215858-2444 start --dry-run
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220516215858-2444 start --dry-run: (7.0439129s)
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220516215858-2444 start --dry-run
error_spam_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220516215858-2444 start --dry-run: (7.0330818s)
--- PASS: TestErrorSpam/start (21.16s)

TestErrorSpam/status (8.2s)

=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220516215858-2444 status
error_spam_test.go:156: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220516215858-2444 status: exit status 7 (2.7143731s)

-- stdout --
	nospam-20220516215858-2444
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E0516 22:00:39.987329    7688 status.go:258] status error: host: state: unknown state "nospam-20220516215858-2444": docker container inspect nospam-20220516215858-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220516215858-2444
	E0516 22:00:39.987329    7688 status.go:261] The "nospam-20220516215858-2444" host does not exist!

** /stderr **
error_spam_test.go:158: "out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220516215858-2444 status" failed: exit status 7
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220516215858-2444 status
error_spam_test.go:156: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220516215858-2444 status: exit status 7 (2.7735877s)

-- stdout --
	nospam-20220516215858-2444
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E0516 22:00:42.755527    2180 status.go:258] status error: host: state: unknown state "nospam-20220516215858-2444": docker container inspect nospam-20220516215858-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220516215858-2444
	E0516 22:00:42.755527    2180 status.go:261] The "nospam-20220516215858-2444" host does not exist!

** /stderr **
error_spam_test.go:158: "out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220516215858-2444 status" failed: exit status 7
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220516215858-2444 status
error_spam_test.go:179: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220516215858-2444 status: exit status 7 (2.7085963s)

-- stdout --
	nospam-20220516215858-2444
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E0516 22:00:45.457060    5244 status.go:258] status error: host: state: unknown state "nospam-20220516215858-2444": docker container inspect nospam-20220516215858-2444 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220516215858-2444
	E0516 22:00:45.457060    5244 status.go:261] The "nospam-20220516215858-2444" host does not exist!

** /stderr **
error_spam_test.go:181: "out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220516215858-2444 status" failed: exit status 7
--- PASS: TestErrorSpam/status (8.20s)

TestErrorSpam/pause (9.11s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220516215858-2444 pause
error_spam_test.go:156: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220516215858-2444 pause: exit status 80 (3.0744397s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20220516215858-2444": docker container inspect nospam-20220516215858-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220516215858-2444
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_201.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:158: "out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220516215858-2444 pause" failed: exit status 80
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220516215858-2444 pause
error_spam_test.go:156: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220516215858-2444 pause: exit status 80 (3.0241003s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20220516215858-2444": docker container inspect nospam-20220516215858-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220516215858-2444
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_201.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:158: "out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220516215858-2444 pause" failed: exit status 80
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220516215858-2444 pause
error_spam_test.go:179: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220516215858-2444 pause: exit status 80 (3.0116012s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20220516215858-2444": docker container inspect nospam-20220516215858-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220516215858-2444
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_201.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:181: "out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220516215858-2444 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (9.11s)

TestErrorSpam/unpause (9.05s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220516215858-2444 unpause
error_spam_test.go:156: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220516215858-2444 unpause: exit status 80 (3.0135631s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20220516215858-2444": docker container inspect nospam-20220516215858-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220516215858-2444
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_201.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:158: "out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220516215858-2444 unpause" failed: exit status 80
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220516215858-2444 unpause
error_spam_test.go:156: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220516215858-2444 unpause: exit status 80 (3.00503s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20220516215858-2444": docker container inspect nospam-20220516215858-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220516215858-2444
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_201.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:158: "out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220516215858-2444 unpause" failed: exit status 80
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220516215858-2444 unpause
error_spam_test.go:179: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220516215858-2444 unpause: exit status 80 (3.0311493s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20220516215858-2444": docker container inspect nospam-20220516215858-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220516215858-2444
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_201.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:181: "out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220516215858-2444 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (9.05s)

TestErrorSpam/stop (66.01s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220516215858-2444 stop
error_spam_test.go:156: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220516215858-2444 stop: exit status 82 (22.140533s)

-- stdout --
	* Stopping node "nospam-20220516215858-2444"  ...
	* Stopping node "nospam-20220516215858-2444"  ...
	* Stopping node "nospam-20220516215858-2444"  ...
	* Stopping node "nospam-20220516215858-2444"  ...
	* Stopping node "nospam-20220516215858-2444"  ...
	* Stopping node "nospam-20220516215858-2444"  ...
	
	

-- /stdout --
** stderr ** 
	E0516 22:01:08.965467    8988 daemonize_windows.go:38] error terminating scheduled stop for profile nospam-20220516215858-2444: stopping schedule-stop service for profile nospam-20220516215858-2444: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "nospam-20220516215858-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20220516215858-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220516215858-2444
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect nospam-20220516215858-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220516215858-2444
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_201.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:158: "out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220516215858-2444 stop" failed: exit status 82
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220516215858-2444 stop
error_spam_test.go:156: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220516215858-2444 stop: exit status 82 (21.9168695s)

-- stdout --
	* Stopping node "nospam-20220516215858-2444"  ...
	* Stopping node "nospam-20220516215858-2444"  ...
	* Stopping node "nospam-20220516215858-2444"  ...
	* Stopping node "nospam-20220516215858-2444"  ...
	* Stopping node "nospam-20220516215858-2444"  ...
	* Stopping node "nospam-20220516215858-2444"  ...
	
	

-- /stdout --
** stderr ** 
	E0516 22:01:30.946517    6944 daemonize_windows.go:38] error terminating scheduled stop for profile nospam-20220516215858-2444: stopping schedule-stop service for profile nospam-20220516215858-2444: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "nospam-20220516215858-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20220516215858-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220516215858-2444
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect nospam-20220516215858-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220516215858-2444
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_201.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:158: "out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220516215858-2444 stop" failed: exit status 82
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220516215858-2444 stop
error_spam_test.go:179: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220516215858-2444 stop: exit status 82 (21.9516718s)

-- stdout --
	* Stopping node "nospam-20220516215858-2444"  ...
	* Stopping node "nospam-20220516215858-2444"  ...
	* Stopping node "nospam-20220516215858-2444"  ...
	* Stopping node "nospam-20220516215858-2444"  ...
	* Stopping node "nospam-20220516215858-2444"  ...
	* Stopping node "nospam-20220516215858-2444"  ...
	
	

-- /stdout --
** stderr ** 
	E0516 22:01:52.838861    4876 daemonize_windows.go:38] error terminating scheduled stop for profile nospam-20220516215858-2444: stopping schedule-stop service for profile nospam-20220516215858-2444: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "nospam-20220516215858-2444": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20220516215858-2444: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220516215858-2444
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect nospam-20220516215858-2444 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220516215858-2444
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_201.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:181: "out/minikube-windows-amd64.exe -p nospam-20220516215858-2444 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220516215858-2444 stop" failed: exit status 82
--- PASS: TestErrorSpam/stop (66.01s)

TestFunctional/serial/CopySyncFile (0.03s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1781: local sync path: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\test\nested\copy\2444\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

TestFunctional/serial/CacheCmd/cache/add_remote (10.93s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1041: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 cache add k8s.gcr.io/pause:3.1
functional_test.go:1041: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 cache add k8s.gcr.io/pause:3.1: (3.7135631s)
functional_test.go:1041: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 cache add k8s.gcr.io/pause:3.3
functional_test.go:1041: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 cache add k8s.gcr.io/pause:3.3: (3.6037168s)
functional_test.go:1041: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 cache add k8s.gcr.io/pause:latest
functional_test.go:1041: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 cache add k8s.gcr.io/pause:latest: (3.6086003s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (10.93s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.39s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1094: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.39s)

TestFunctional/serial/CacheCmd/cache/list (0.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1102: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.35s)

TestFunctional/serial/CacheCmd/cache/delete (0.73s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1164: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.1
functional_test.go:1164: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.73s)

TestFunctional/parallel/ConfigCmd (2.21s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 config get cpus
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 config get cpus: exit status 14 (347.2607ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 config set cpus 2
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 config get cpus
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 config unset cpus
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 config get cpus
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 config get cpus: exit status 14 (358.9839ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (2.21s)

TestFunctional/parallel/DryRun (12.89s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220516220221-2444 --dry-run --memory 250MB --alsologtostderr --driver=docker

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20220516220221-2444 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (5.4049258s)

-- stdout --
	* [functional-20220516220221-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0516 22:08:51.763622    3348 out.go:296] Setting OutFile to fd 976 ...
	I0516 22:08:51.838134    3348 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:08:51.838229    3348 out.go:309] Setting ErrFile to fd 668...
	I0516 22:08:51.838229    3348 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:08:51.848305    3348 out.go:303] Setting JSON to false
	I0516 22:08:51.860326    3348 start.go:115] hostinfo: {"hostname":"minikube2","uptime":2044,"bootTime":1652736887,"procs":153,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0516 22:08:51.860326    3348 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0516 22:08:51.866306    3348 out.go:177] * [functional-20220516220221-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0516 22:08:51.869306    3348 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0516 22:08:51.872307    3348 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0516 22:08:51.874305    3348 out.go:177]   - MINIKUBE_LOCATION=12739
	I0516 22:08:51.877310    3348 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0516 22:08:51.879311    3348 config.go:178] Loaded profile config "functional-20220516220221-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:08:51.880319    3348 driver.go:358] Setting default libvirt URI to qemu:///system
	I0516 22:08:54.644635    3348 docker.go:137] docker version: linux-20.10.14
	I0516 22:08:54.652836    3348 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:08:56.806309    3348 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1534627s)
	I0516 22:08:56.806309    3348 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-16 22:08:55.7296557 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:08:56.811335    3348 out.go:177] * Using the docker driver based on existing profile
	I0516 22:08:56.814345    3348 start.go:284] selected driver: docker
	I0516 22:08:56.814345    3348 start.go:806] validating driver "docker" against &{Name:functional-20220516220221-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220516220221-2444 Namespace:default APIServerName:mini
kubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 22:08:56.814345    3348 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0516 22:08:56.873446    3348 out.go:177] 
	W0516 22:08:56.877624    3348 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0516 22:08:56.880611    3348 out.go:177] 

** /stderr **
functional_test.go:983: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220516220221-2444 --dry-run --alsologtostderr -v=1 --driver=docker

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:983: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20220516220221-2444 --dry-run --alsologtostderr -v=1 --driver=docker: (7.489079s)
--- PASS: TestFunctional/parallel/DryRun (12.89s)

TestFunctional/parallel/InternationalLanguage (5.38s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220516220221-2444 --dry-run --memory 250MB --alsologtostderr --driver=docker

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20220516220221-2444 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (5.3762047s)

-- stdout --
	* [functional-20220516220221-2444] minikube v1.26.0-beta.0 sur Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0516 22:08:44.403221    9040 out.go:296] Setting OutFile to fd 888 ...
	I0516 22:08:44.459570    9040 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:08:44.459570    9040 out.go:309] Setting ErrFile to fd 732...
	I0516 22:08:44.459570    9040 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0516 22:08:44.471097    9040 out.go:303] Setting JSON to false
	I0516 22:08:44.473608    9040 start.go:115] hostinfo: {"hostname":"minikube2","uptime":2036,"bootTime":1652736888,"procs":152,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0516 22:08:44.473608    9040 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0516 22:08:44.478033    9040 out.go:177] * [functional-20220516220221-2444] minikube v1.26.0-beta.0 sur Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0516 22:08:44.481251    9040 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0516 22:08:44.483860    9040 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0516 22:08:44.486281    9040 out.go:177]   - MINIKUBE_LOCATION=12739
	I0516 22:08:44.488282    9040 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0516 22:08:44.491285    9040 config.go:178] Loaded profile config "functional-20220516220221-2444": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0516 22:08:44.492281    9040 driver.go:358] Setting default libvirt URI to qemu:///system
	I0516 22:08:47.193199    9040 docker.go:137] docker version: linux-20.10.14
	I0516 22:08:47.208588    9040 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0516 22:08:49.384119    9040 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1755206s)
	I0516 22:08:49.384777    9040 info.go:265] docker info: {ID:VFNF:PK6E:GYN6:VIFG:Y2MF:ZCSI:VHRA:KTKE:Z2AX:SVDU:TRKI:EWUE Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-16 22:08:48.2620503 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0516 22:08:49.389694    9040 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0516 22:08:49.391668    9040 start.go:284] selected driver: docker
	I0516 22:08:49.391668    9040 start.go:806] validating driver "docker" against &{Name:functional-20220516220221-2444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220516220221-2444 Namespace:default APIServerName:mini
kubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0516 22:08:49.392344    9040 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0516 22:08:49.500519    9040 out.go:177] 
	W0516 22:08:49.503080    9040 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0516 22:08:49.506464    9040 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (5.38s)

TestFunctional/parallel/AddonsCmd (3.43s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1619: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 addons list

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1619: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 addons list: (3.0802801s)
functional_test.go:1631: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (3.43s)

TestFunctional/parallel/ProfileCmd/profile_not_create (7.17s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1265: (dbg) Run:  out/minikube-windows-amd64.exe profile lis

=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1265: (dbg) Done: out/minikube-windows-amd64.exe profile lis: (3.0652744s)
functional_test.go:1270: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json

=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (4.1010242s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (7.17s)

TestFunctional/parallel/ProfileCmd/profile_list (4.55s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1305: (dbg) Run:  out/minikube-windows-amd64.exe profile list

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1305: (dbg) Done: out/minikube-windows-amd64.exe profile list: (4.1733753s)
functional_test.go:1310: Took "4.1736303s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1319: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1324: Took "375.5222ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (4.55s)

TestFunctional/parallel/ProfileCmd/profile_json_output (4.58s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1356: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json

=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1356: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (4.2126466s)
functional_test.go:1361: Took "4.2128138s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1369: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light

=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1374: Took "367.7512ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (4.58s)

TestFunctional/parallel/Version/short (0.36s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2182: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 version --short
--- PASS: TestFunctional/parallel/Version/short (0.36s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-20220516220221-2444 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-20220516220221-2444 tunnel --alsologtostderr] ...
helpers_test.go:506: unable to kill pid 7104: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

TestFunctional/parallel/ImageCommands/ImageRemove (5.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 image rm gcr.io/google-containers/addon-resizer:functional-20220516220221-2444

=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 image rm gcr.io/google-containers/addon-resizer:functional-20220516220221-2444: (2.9587324s)
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220516220221-2444 image ls

=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220516220221-2444 image ls: (2.982586s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (5.94s)

TestFunctional/delete_addon-resizer_images (2.09s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:185: (dbg) Done: docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8: (1.0368813s)
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220516220221-2444
functional_test.go:185: (dbg) Done: docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220516220221-2444: (1.0370059s)
--- PASS: TestFunctional/delete_addon-resizer_images (2.09s)

TestFunctional/delete_my-image_image (1.1s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:193: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220516220221-2444
functional_test.go:193: (dbg) Done: docker rmi -f localhost/my-image:functional-20220516220221-2444: (1.0862609s)
--- PASS: TestFunctional/delete_my-image_image (1.10s)

TestFunctional/delete_minikube_cached_images (1.07s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:201: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220516220221-2444
functional_test.go:201: (dbg) Done: docker rmi -f minikube-local-cache-test:functional-20220516220221-2444: (1.0546845s)
--- PASS: TestFunctional/delete_minikube_cached_images (1.07s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (2.8s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220516221408-2444 addons enable ingress-dns --alsologtostderr -v=5
ingress_addon_legacy_test.go:79: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220516221408-2444 addons enable ingress-dns --alsologtostderr -v=5: (2.7982681s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (2.80s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (7.42s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-20220516221743-2444 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-20220516221743-2444 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (379.9758ms)

-- stdout --
	{"specversion":"1.0","id":"2763d26c-8bda-429d-896e-f988efa1bf57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220516221743-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"00801b43-f6bc-44f1-ba4b-d45f7daa3624","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"14ed7231-5765-4b90-95f2-005c6f25a8e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"2870af2f-a384-49e0-8434-a75962116f2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=12739"}}
	{"specversion":"1.0","id":"05555b63-7cac-4aac-ac5e-1d9f0d097e90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a9dfa018-7d47-484e-99b2-fa76f906b931","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20220516221743-2444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-20220516221743-2444
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-20220516221743-2444: (7.0425869s)
--- PASS: TestErrorJSONOutput (7.42s)

TestKicCustomNetwork/use_default_bridge_network (227.07s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-20220516222155-2444 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-20220516222155-2444 --network=bridge: (3m7.1338266s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:122: (dbg) Done: docker network ls --format {{.Name}}: (1.0713574s)
helpers_test.go:175: Cleaning up "docker-network-20220516222155-2444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-20220516222155-2444
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-20220516222155-2444: (38.8515327s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (227.07s)

TestMainNoArgs (0.33s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.33s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.65s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220516224650-2444 --no-kubernetes --kubernetes-version=1.20 --driver=docker

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-20220516224650-2444 --no-kubernetes --kubernetes-version=1.20 --driver=docker: exit status 14 (654.3855ms)

-- stdout --
	* [NoKubernetes-20220516224650-2444] minikube v1.26.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.65s)

TestStoppedBinaryUpgrade/Setup (0.62s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.62s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.05s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-20220516230100-2444 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

=== CONT  TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-20220516230100-2444 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.05205s)
start_stop_delete_test.go:213: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.05s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:286: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (19/219)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.23.6/cached-images (0s)

=== RUN   TestDownloadOnly/v1.23.6/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.6/cached-images (0.00s)

TestDownloadOnly/v1.23.6/binaries (0s)

=== RUN   TestDownloadOnly/v1.23.6/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.6/binaries (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1069: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220516220221-2444 C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local4057083407\001
functional_test.go:1069: (dbg) Non-zero exit: docker build -t minikube-local-cache-test:functional-20220516220221-2444 C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local4057083407\001: exit status 1 (1.0915476s)

** stderr ** 
	#2 [internal] load build definition from Dockerfile
	#2 sha256:2bc6132e72b36683e9ef3b0bc197716f6ea2fafa64c002c5a1c9f11cf5b244f0
	#2 ERROR: failed to create lease: write /var/lib/docker/buildkit/containerdmeta.db: read-only file system
	
	#1 [internal] load .dockerignore
	#1 sha256:2fcc3bd37176e2e601c048476cad18798e28776e6ee2b2849f94485feb0ff865
	#1 ERROR: failed to create lease: write /var/lib/docker/buildkit/containerdmeta.db: read-only file system
	------
	 > [internal] load build definition from Dockerfile:
	------
	------
	 > [internal] load .dockerignore:
	------
	failed to solve with frontend dockerfile.v0: failed to read dockerfile: failed to create lease: write /var/lib/docker/buildkit/containerdmeta.db: read-only file system

** /stderr **
functional_test.go:1071: failed to build docker image, skipping local test: exit status 1
--- SKIP: TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

TestFunctional/parallel/DashboardCmd (300.01s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:897: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-20220516220221-2444 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:908: output didn't produce a URL
functional_test.go:902: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-20220516220221-2444 --alsologtostderr -v=1] ...
helpers_test.go:488: unable to find parent, assuming dead: process does not exist
--- SKIP: TestFunctional/parallel/DashboardCmd (300.01s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:542: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:193: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestStartStop/group/disable-driver-mounts (7.58s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:105: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20220516230037-2444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-20220516230037-2444

=== CONT  TestStartStop/group/disable-driver-mounts
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p disable-driver-mounts-20220516230037-2444: (7.5817025s)
--- SKIP: TestStartStop/group/disable-driver-mounts (7.58s)

TestNetworkPlugins/group/flannel (7.62s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20220516225301-2444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p flannel-20220516225301-2444
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p flannel-20220516225301-2444: (7.6243327s)
--- SKIP: TestNetworkPlugins/group/flannel (7.62s)
